
Principles

The Principles behind the Agile Manifesto

  • Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
  • Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.
  • Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
  • Business people and developers must work together daily throughout the project.
  • Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
  • The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
  • Working software is the primary measure of progress.
  • Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
  • Continuous attention to technical excellence and good design enhances agility.
  • Simplicity–the art of maximizing the amount of work not done–is essential.
  • The best architectures, requirements, and designs emerge from self-organizing teams.
  • At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behaviour accordingly.

The Five Principles of Lean

  1. Value – specify what creates value from the customer’s perspective.
  2. The value stream – identify all the steps along the process chain.
  3. Flow – make the value process flow.
  4. Pull – make only what is needed by the customer (short term response to the customer’s rate of demand).
  5. Perfection – strive for perfection by continually attempting to produce exactly what the customer wants.

The Seven Principles of Lean Software Development

  1. Eliminate waste
  2. Amplify learning
  3. Decide as late as possible
  4. Deliver as fast as possible
  5. Empower the team
  6. Build integrity in
  7. See the whole

The 4 Sections and the 14 Principles of the Toyota Way

I. Having a long-term philosophy that drives a long-term approach to building a learning organization

1. Base your management decisions on a long-term philosophy, even at the expense of short-term financial goals

II. The right process will produce the right results

2. Create a continuous process flow to bring problems to the surface

3. Use “pull” systems to avoid overproduction

4. Level out the workload (heijunka). (Work like the tortoise, not the hare)

5. Build a culture of stopping to fix problems, to get quality right the first time

6. Standardized tasks and processes are the foundation for continuous improvement and employee empowerment

7. Use visual control so no problems are hidden

8. Use only reliable, thoroughly tested technology that serves your people and processes

III. Add value to the organization by developing its people and partners

9. Grow leaders who thoroughly understand the work, live the philosophy, and teach it to others

10. Develop exceptional people and teams who follow your company’s philosophy

11. Respect your extended network of partners and suppliers by challenging them and helping them improve

IV. Continuously solving root problems to drive organizational learning

12. Go and see for yourself to thoroughly understand the situation (Genchi Genbutsu).

13. Make decisions slowly by consensus, thoroughly considering all options; implement decisions rapidly (Nemawashi).

14. Become a learning organization through relentless reflection (hansei) and continuous improvement (kaizen).

Kanban is just a tool, so why is it being treated like a methodology?

I was throwing some shapes on Twitter recently about some concerns I have with the current Kanban craze. Unfortunately I think the cursed 140 character limit meant my points got misinterpreted and may have led people to think I’m anti-Kanban, which is not the case; in fact it’s quite the opposite. I’ve been using Kanban boards for over a year and a half and jointly ran a presentation at XPDay2008 on evolving from Scrum to Lean which focused heavily on the use of Kanban boards.

The thing that’s making me itchy is how Kanban has somehow been elevated into a methodology unto itself. We don’t have “Scrum and Sprint” conferences or XPandPairProgrammingDay, so why do we have the Lean Kanban Conference Miami or the UK Lean and Kanban Conference? And pretty much everywhere you see someone talking about Lean software development, the title of the blog or presentation includes Kanban in the same breath. More than that, I see a lot of discussion around Kanban in blog posts and on Twitter but very little on Lean or the Lean Software Development principles.

I’m sure proponents of Kanban will say no one is suggesting Kanban is a methodology, and I would agree: I’ve not seen anyone say it is. The problem is interpretation. People have a habit of focusing on rules and methodologies because they’re a lot easier to tackle than the problems they were created to solve. Scrum has been enormously successful (if you consider wide adoption a measure of success), but very few teams are doing it well, as James Shore has been writing eloquently about recently, because it does not force you to address the real issues. The beauty of Lean software development is that it is just a set of principles. It intentionally avoids prescribing how to do something. Obviously this causes problems, as most people don’t want to get involved in the difficult stuff; they just want to be told how to do it. Consider this reply I received on Twitter:

erwilke @robbowley “I think we’re trying to avoid kanban being seen as a stand-alone methodology, but people don’t “get” it as a set of tools”.

Maybe there’s a good reason why people don’t get it: you need to understand where it’s coming from. Focusing on Kanban and ignoring all the rest, however? That’s easy!

Elevating Kanban to the prominent position it is now in makes me feel like history is going to repeat itself. I prophesied this some months back. It has been the most popular post on my blog by a long way.

If you’re getting into Kanban, be warned. Kanban is just a tool and in my opinion no more important than, say, pair programming, unit testing or domain driven development. It is certainly a lot less important than the elephant in the room which very few people seem to be addressing: building the right thing in the first place. As Peter Drucker famously said: “There is nothing so useless as doing efficiently that which should not be done at all”.

Kanban is a small part of something much, much bigger: see the whole.

 

*Edit* Some responses to this article:

Is Kanban Just a Tool? – David Anderson

It is Not What It is that Really Matters – Israel Gat

Kanban: It’s a Tool, and There’s No Such Thing as “Just” a Tool – Project Management Revolutionary

Depend in the direction of stability

As a general rule of thumb you should depend in the direction of stability (the Stable Dependencies Principle, or SDP). A package should only depend upon packages that are more stable than it is. If something is changing a lot, we should not depend on it. If it isn’t, we can comfortably reference it as a versioned package/assembly.

“But I want to add something to the GeneralFunctions/Domain/Utilities project”
Why? If you are the only one using it then there is no reason for it to be there.

“But someone else may want to use it in the future”
The possibility that someone may is not a good enough reason to put it there. Follow the principle of You Aren’t Gonna Need It (YAGNI). 

“But what if I need to change something that already exists?”
“Software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification” – The Open Closed Principle
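To make the point concrete, here is a minimal sketch (all names are hypothetical, not taken from any real codebase) of extending behaviour rather than modifying the stable code other packages already depend on:

// The stable abstraction that other packages already reference.
public interface IPriceCalculator
{
    decimal Calculate(decimal basePrice);
}

// The existing implementation stays exactly as it is: closed for modification.
public class StandardPriceCalculator : IPriceCalculator
{
    public decimal Calculate(decimal basePrice)
    {
        return basePrice;
    }
}

// New behaviour arrives as a new type: open for extension.
// Nothing that already depends on the stable package has to change or be re-released.
public class DiscountedPriceCalculator : IPriceCalculator
{
    private readonly decimal _discount;

    public DiscountedPriceCalculator(decimal discount)
    {
        _discount = discount;
    }

    public decimal Calculate(decimal basePrice)
    {
        return basePrice * (1 - _discount);
    }
}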

Creating a TeamCity currently failed builds page

Frustrated with seeing too many failed builds and wanting to make the issue more visible, we are planning on putting an LCD monitor on the wall to display all the failing builds. Unfortunately, TeamCity does not have such a page available. There are various ways of receiving the status of builds, such as RSS feeds and the Status Widget, but the information is no more than you can see on the overview page, and if you have a lot of builds it’s not easy to see where the problems are.

First I experimented with using Linq to mine the RSS feeds and was getting somewhere, but it wasn’t exactly a five minute job. Then Kirill Maximov, a JetBrains TeamCity developer, responded to my cries for help on Twitter saying I could modify the External Status Widget, and a whole world of fun was opened up to me.
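For the curious, this is roughly the kind of thing I was playing with. It is only a sketch: the feed URL is hypothetical and it assumes failed builds can be spotted from the item title, which is a simplification.

using System;
using System.Linq;
using System.Xml.Linq;

class FailedBuildsFromRss
{
    static void Main()
    {
        // Hypothetical feed URL; substitute your own TeamCity RSS feed.
        XDocument feed = XDocument.Load("http://teamcity.example.com/feed.html");

        // Pull out the items whose titles mention a failure.
        var failed = from item in feed.Descendants("item")
                     let title = (string)item.Element("title")
                     where title != null && title.Contains("failed")
                     select title;

        foreach (string title in failed)
        {
            Console.WriteLine(title);
        }
    }
}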

How to create a failed builds page

  1. Create an html page that includes the External Status Widget as described here. By default this page will only show the current status of all projects as you see on the overview page, but we only want to know about failed builds.
  2. Add the wrapping <c:if test="${not buildType.status.successful}"></c:if> to the file *TeamCity*/webapps/ROOT/status/externalStatus.jsp (it is probably worth making a backup of this file first).
  3. That’s it! However, as you have now realised, you can do a lot more customisation. If you have a root through some of the jsp files you’ll see there are a lot of features you can take advantage of to customise further.

You can download my html page and modified externalStatus.jsp files here

Team principles

On my new team we’ve just started our first proper iteration. We’ve agreed to commit to the following principles:

  • No multi-tasking
  • All new or changed code must be thoroughly unit tested
  • No more than 2 pieces of work in “Active Work” at any time (our team is 3 developers)
  • We always work on the highest priority task
  • Our definition of done is “In UAT”
  • Leave it in a better condition than you found it
  • No hidden work
  • No overtime
  • No disruptions
  • Don’t SysTest your own work (we don’t have a tester yet)

Have you done something similar? What are your team’s principles?

TDD keeps you focused

One of the less stated benefits of doing TDD is that it concentrates your mind on the problem.

Recently I was looking at a section of code which was creating a Lucene search string from criteria passed in by the user. I needed to add some new functionality, but it was a bit higgledy-piggledy, so, following the broken windows principle, I decided to refactor it to the builder pattern before I added my new feature.
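Purely for illustration (the real class and method names were nothing like this), the kind of builder we were aiming for looks roughly like:

using System;
using System.Collections.Generic;

// Hypothetical sketch of a fluent builder for a Lucene-style query string;
// not the actual code, just the shape of the refactoring target.
public class SearchQueryBuilder
{
    private readonly List<string> _clauses = new List<string>();

    public SearchQueryBuilder WithTerm(string field, string value)
    {
        _clauses.Add(field + ":" + value);
        return this;
    }

    public SearchQueryBuilder WithDateRange(string field, DateTime from, DateTime to)
    {
        _clauses.Add(string.Format("{0}:[{1:yyyyMMdd} TO {2:yyyyMMdd}]", field, from, to));
        return this;
    }

    public string Build()
    {
        return string.Join(" AND ", _clauses.ToArray());
    }
}

// Usage: string query = new SearchQueryBuilder().WithTerm("title", "lean").Build();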

I ran NCover on the code and it reported 70% coverage, which, considering what I knew about the history of the system, was surprisingly high. I decided there was enough coverage, paired with a colleague and dived in. At first we were doing well. Our changes broke the tests a few times, which reassured us about the coverage, but after a while it felt like we were losing steam. We’d done a lot of refactoring but it didn’t feel like we’d got very far.

Skip to a couple of days later – I’d not been able to touch the code and neither had my colleague, who had now been taken off to deal with another project, so I pulled in the other dev on my team, who is probably the most fanatical TDD nut I’ve ever met. Immediately he was uncomfortable: he’d quickly spotted that the tests were really only integration tests (something I had put to the back of my mind) and got jittery as we continued from where I’d left off. I didn’t like to see him in so much pain, so I quickly relented and let him write new tests, even though I felt it was just going to slow us down, maybe even to the point where we wouldn’t get it finished. However, he assured me that although it felt slow now, we’d be racing along soon enough.

Not only was he right about that, but as we were doing it the objective was so much clearer. Writing the new tests meant we were forced to really think about each move we made so our refactoring had clear direction. We spent about the same time working on the problem but probably made twice as much progress as my previous attempt and ended up with a proper unit test suite around the code.

Some day soon people will probably find ways to do away with the need to do TDD, which I think will be a shame, as it never ceases to amaze me how many benefits it has.

Software is not like a house

“Building a house” is perhaps the most overused software analogy out there. Sure, there are many overlaps and it’s a concept that non-technical people can grasp easily when we need to explain something, but it simply doesn’t add up. I’ve run into this analogy frequently in my current obsession with estimation (someone even used it to comment on one of my recent articles), so I’ve been compelled to take it on as well ;-). Below is a typical example of an argument I hear all the time. It was posted in response to this article on InfoQ:

“So, you’re going to remodel your old house. It’s a two-story Victorian, and you want to knock out half of the back exterior wall, expand the kitchen, and a 3/4 bathroom, and put a new bedroom above all that on the second floor.

You call up a contractor, and she says that she won’t give you any estimates, but she will show clear progress on a weekly basis, and you’re free to kill the contract at the end of any given week if you’re not satisfied.

So you begin work. They blast out the back wall, frame up the new rooms, and they’re beginning work on the kitchen expansion, when you begin to realize that you’re burning through your savings faster than you expected, and you’re not sure if you’ll be able to complete the job as you’d like. In fact, you’re beginning to worry if you’ll be able to get the thing completed in any useful way at all. At this point, you ask the contractor for an estimate of the work remaining to be done, and she gleefully responds, ‘I don’t give estimates, but you can cancel at any time!'”

If a house were like software and I were a contractor given the above conditions, this is what I would do: I would build something that was usable and fulfilled the absolute minimum requirements as soon as possible (e.g. a wooden shed extension with a camping stove), see what the customer thinks and then rebuild it, say, every two weeks, adding more and more of the requirements and responding to their feedback as we go, making sure that at the end of every two weeks the customer still had something they could use. In the end we’d have fulfilled as many of the requirements as possible given the allocated money and time frame. We wouldn’t have gone over budget, we’d have been able to respond to unexpected events (e.g. finding out the foundations are not sufficient), and the customer could have changed their mind (on windows, room layouts etc.) as we went.

Crucially, what the author of the above comment fails to grasp is the difference between incremental and iterative development (see the end of this article). Developing software (notice how no one says “building” software) is not like building a house – we have the luxury that it’s relatively inexpensive (if we’ve followed good design principles) to go back and change it as often as we like. I say, tell me honestly how much you’ve got to spend and how much time you have, and I’ll give you the best possible value for those conditions. If an unexpected event occurred which meant we couldn’t deliver, well, it would have happened if we’d estimated it anyway. At least my way (well, it’s not my way, is it…) we’re more likely to find out sooner and hopefully save the customer some money and embarrassment.

If you’re going to argue with me (which I dearly hope you will), please do not use this tired analogy. At least now, rather than engaging with you, I can point you this way and I don’t have to repeat myself 🙂

Further reading:

Jeff Patton explains on his blog the important difference between iterative and incremental development – an excerpt from an excellent presentation I saw him give at XPDay 2007.
The Software Construction Analogy is Broken. A very thorough article posted on kuro5hin a while back.

How do you measure success?

In my last post, I railed against the list makers and pondered why people are still so keen to pursue such fine-grained detail. I hinted at something I’ve been thinking about for a while: that it comes down to who you’re working for.

Probably the biggest benefit of working so intimately with our customer has been the amount of work we don’t undertake. The problem with this is that it’s very difficult (read: impossible) to measure. The customer may be very happy, but there’s nothing to show for it. In fact, you could argue this works against us, as we’re doing ourselves out of a job.

Managers can only evaluate our productivity on the work we do. Where I work our project manager is required to issue a progress report every 2 weeks. It consists of our spending (which I feel is quite justified), a list of all the stories we’ve been working on, their status and our velocity (in story points). If something is taking a lot longer than expected she is required to explain why. A lot of time is spent analysing our progress this way, but very little time getting feedback from our customer (shouldn’t there be a customer happiness rating in this bi-weekly report?).

However, I have sympathy with my managers. It’s their job to report on our progress to their managers, justify our existence and protect us from people looking for excuses to cut budgets. This, I feel, is where the irrational desire for measurement comes from. Somewhere up the line someone who is too busy to get their hands dirty with the detail wants a really simple way of diagnosing the health of a project (“Costing 50K a month but only delivering 20 points? Hmm… clearly something wrong there.”). If the velocity suddenly drops, does this necessarily mean there’s something wrong? Unfortunately anecdotal evidence and gut feelings don’t translate very well to very hierarchical structures (which I think says more about the structure).

Imagine this scenario: a bank has a crucial trading system which requires no development work for the foreseeable future. It has a small team of domain-expert developers who’ve worked on the system for a long time but now have nothing to do (they’re not doing nothing, though; being enthusiastic devs they’re investigating new technologies and better ways of doing things, as well as spiking new ideas which could improve the system). A manager sees a report which shows they’re adding “no value” and makes them all redundant. Very soon after there is a problem with the system; it falls over and all trading has to cease until it’s fixed. As all the domain experts are no longer there it takes a very long time and the bank loses tens of millions. If the manager had kept the team on they’d have fixed it in a fraction of the time and saved the bank a fortune. If, rather than looking at productivity reports, the manager had appreciated the value of the system to the customer, I doubt he would have made the decision he did.

Customers couldn’t give a stuff about velocity, story points or any other spurious measurement technique you care to come up with. They do care about you asking them questions and always seem delighted to be involved when we want to show them something or give them some feedback (tip: it’s important that this never takes more than 5-10 minutes). If we’re having problems we don’t try and hide them, we explain them to our customer in a way they’ll understand (so far they’ve been very understanding). Ultimately (and directly in our case), it’s our customer who’s paying the bills, not our managers.

If our ultimate goal is to provide as much value to our company as possible, then it’s certainly not served by trying to measure the highly subjective productivity of the developers*. If there’s any measurement to be done we should be focusing our efforts on how much we’re satisfying our customer’s needs. There are many ways we could approach this, such as seeing how they use our software (or, more importantly, don’t), asking them to provide feedback, imagining how it would impact them if something we built was taken away, roughly estimating how many people it would take to do the jobs our systems replace, and so on. However, I feel it will take some people a lot of convincing that this is the way to go…

*Which, as Fowler points out, we cannot measure anyway

nMock vs RhinoMocks

I’ve recently started using RhinoMocks instead of nMock, mainly because it’s strongly-typed. However, I’ve found a few other little treats:

Stubs

In nMock, if you want to stub some method calls on a mocked interface you have to do something like this:

Mockery mockery = new Mockery();
IService mockService = mockery.NewMock<IService>();
Stub.On(mockService).Method("MethodA");
Stub.On(mockService).Method("MethodB");
Stub.On(mockService).Method("MethodC");
...

Which is cumbersome and noisy. In RhinoMocks you can do this:

MockRepository repository = new MockRepository();
IService serviceMock = repository.Stub<IService>();
...

…and RhinoMocks will ignore all calls to that interface. This is really nice as you generally only test the SUT’s interaction with one dependency at a time.

Dynamic Mocks

If you only want to test one interaction with a dependency and ignore all others you can create a dynamic mock.

MockRepository repository = new MockRepository();
IService serviceMock = repository.DynamicMock<IService>();
...

All calls to the mocked dependency will be ignored unless they are explicitly expected (e.g. Expect.Call(serviceMock.MethodA())…). This is the same as not saying mockery.VerifyAllExpectationsHaveBeenMet() in nMock. It’s always annoyed me that you have to remember to do this in nMock and I much prefer that the default for RhinoMocks is to fail when encountering an unexpected method call.
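To give a fuller picture, here is a minimal end-to-end sketch of a dynamic mock where only one interaction is verified. The interface, consumer and method names are all hypothetical, purely to make the example self-contained.

using NUnit.Framework;
using Rhino.Mocks;

// Hypothetical collaborator and consumer.
public interface IService
{
    string MethodA();
    void MethodB(); // called by the consumer but of no interest to this test
}

public class Consumer
{
    private readonly IService _service;

    public Consumer(IService service)
    {
        _service = service;
    }

    public string DoWork()
    {
        _service.MethodB();        // ignored by the dynamic mock
        return _service.MethodA(); // the one interaction we care about
    }
}

[TestFixture]
public class ConsumerTests
{
    [Test]
    public void Only_the_expected_interaction_is_verified()
    {
        MockRepository repository = new MockRepository();
        IService serviceMock = repository.DynamicMock<IService>();

        // The single call we expect; everything else on the mock is ignored.
        Expect.Call(serviceMock.MethodA()).Return("result");

        repository.ReplayAll();

        string result = new Consumer(serviceMock).DoWork();

        Assert.AreEqual("result", result);
        repository.VerifyAll(); // fails if MethodA() was never called
    }
}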

Raising Events

nMock does not natively support raising events, which is a pain, but there are ways around it (I’ve extended his example to support custom EventArgs, which you can download here). With RhinoMocks it’s much simpler. Rather than explaining it myself, check out J-P Boodhoo’s great example here.
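To give a flavour of it, here is a minimal sketch using the classic record/replay style. The interface and names are hypothetical, and it assumes the LastCall/IEventRaiser API rather than reproducing anything from J-P’s example.

using System;
using NUnit.Framework;
using Rhino.Mocks;
using Rhino.Mocks.Interfaces;

// Hypothetical event source and subscriber, just to show the mechanics.
public interface IAlerts
{
    event EventHandler Alarm;
}

public class Listener
{
    public bool Alerted { get; private set; }

    public Listener(IAlerts alerts)
    {
        alerts.Alarm += delegate { Alerted = true; };
    }
}

[TestFixture]
public class ListenerTests
{
    [Test]
    public void Reacts_when_the_mocked_event_is_raised()
    {
        MockRepository repository = new MockRepository();
        IAlerts alerts = repository.DynamicMock<IAlerts>();

        // Subscribe with a null handler purely so we can grab an event raiser.
        alerts.Alarm += null;
        IEventRaiser raiser = LastCall.IgnoreArguments().GetEventRaiser();

        repository.ReplayAll();

        Listener listener = new Listener(alerts);
        raiser.Raise(this, EventArgs.Empty);

        Assert.IsTrue(listener.Alerted);
    }
}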