Understanding chaos and what it means to software development

Tim Ross pointed me in the direction of a truly mind-blowing documentary on BBC iPlayer about the nature of chaos, its fundamental role in the universe and how it explains the behaviour of complex systems, such as the way birds fly in flocks and how we evolve. If you’re in the UK I’d highly recommend watching it before it gets taken down next Sunday (24th Jan 2010). It’s called The Secret Life of Chaos.

What, you may ask, has it got to do with software development? Everything, actually. If you recognise that you’re working within a complex system, then you must accept that the results will be totally unpredictable (that’s the chaos element), because the laws of nature say it will be so. Instead of trying to force it to be predictable (e.g. long-term planning, estimation-based planning, etc.) you allow it to behave like a complex system, which is very simple:
  • self-organisation e.g. a flock of birds organises itself into the most appropriate formation; no one tells them how to do it – there’s no “head bird” orchestrating things (see the toy sketch after this list).
  • simple rules e.g. animals are impelled to mate with each other, which results in evolution.
  • feedback e.g. mating results in offspring which, if they are successful within their system, also mate and produce offspring, resulting in animals more suited to their system.
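
To make this concrete, here’s a toy sketch in C# (entirely illustrative – ten “birds” on a line, two simple rules, no coordinator) showing order emerge on its own:

    using System;
    using System.Linq;

    class Flock
    {
        static void Main()
        {
            var rng = new Random(1);
            // Ten birds scattered at random positions along a line.
            double[] pos = Enumerable.Range(0, 10)
                                     .Select(_ => rng.NextDouble() * 100)
                                     .ToArray();

            for (int step = 0; step < 100; step++)
            {
                double centre = pos.Average();
                for (int i = 0; i < pos.Length; i++)
                {
                    // Rule 1 (cohesion): drift towards the flock's centre.
                    pos[i] += 0.05 * (centre - pos[i]);

                    // Rule 2 (separation): back away from any bird closer than 2 units.
                    for (int j = 0; j < pos.Length; j++)
                        if (i != j && Math.Abs(pos[i] - pos[j]) < 2)
                            pos[i] += 0.5 * Math.Sign(pos[i] - pos[j]);
                }
            }

            // Nobody was in charge, yet the birds settle into a roughly evenly
            // spaced cluster - the "feedback" is simply each bird reacting to
            // its neighbours every step.
            Console.WriteLine(string.Join(" ", pos.OrderBy(p => p).Select(p => p.ToString("F1"))));
        }
    }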

There are a lot of people within software development starting to talk about how we can harness complexity science to create better organisations and software.

Something in Agile needs fixing

At the recent XPDay 2009 conference in London I organised an open space session under the title “Agile isn’t solving our customers’ problems because they’re not here”. It was driven by my feeling that, whilst Agile done well is improving the reputation of software development, the impact it’s having is relatively minor. My biggest takeaway from the session is that nothing in our Agile toolkit really addresses the needs of our customers*.

In the short time I’ve been involved with the community I’ve seen almost no discussion or articles involving any contribution from people outside of the IT department, and none of the methodologies/name clouds I can think of appear to have been developed or evolved in collaboration with the people driving the work.

I don’t mean to disparage all the hard work and enthusiasm people (I include myself) have put in to trying to make things better; I just think the fact that we’ve left the most important people out of our discussions has caused us a lot of unnecessary pain. Agile adoption is never a smooth affair, especially the process of “convincing” people outside of the development department. If we involved our customers more in the process of forming our principles, tools and methodologies we would surely get where we all want to be a lot more quickly. Remember, people don’t resist change, they resist being changed.

Here are some examples of what I mean:

  • Speaking to my CEO in the pub the other day, he said he often finds it “patronising” when we try to impress on him the importance of the things we’re trying to do.
  • When I first started with Scrum I came up against countless instances of resistance and cynicism when trying to encourage customers to participate in traditional Scrum meetings such as retrospectives, planning and stand-ups. Nothing Scrum teaches prepares you for this (I’ve done both the CSM and the Estimating and Planning courses, and read a lot besides).
  • The terminology we use (e.g. Scrum, Sprints, Stories, Kanban, eXtreme Programming) comes across as totally immature in some people’s eyes. One useful takeaway from the session was to stop using inappropriate terminology in front of customers.

I have an enormous amount of respect for our CEO and others within our organisation for putting the trust they do in us, even though they (understandably) find so much of what we do baffling and irritating. In my experience you would be very lucky to be able to work with someone who is prepared to take that kind of risk – in most places I’ve worked you just don’t stand a chance.

For me, the flaw lies deep within Agile. It was never designed to address the needs of customers. XP and Scrum were designed for fixing dysfunctional environments. The terminology was designed to appeal to developers. When most of the Agile principles and methodologies were developed the need was different, and they don’t appear to have evolved since. New methodologies suffer the same problem – why does there have to be an introductory guide to Kanban for managers? If we’re having to sell this stuff to people, I think we’ve already lost most of the battle.

In the last 15 years or so most of the “microeconomic” problems with software development have been solved. The majority of people writing software may still not be doing it well, but the answers are there if you care to look. The big problems are still out there though, and as a community we need to start addressing them; I think the only way we can do this is by getting our customers more involved in the debate.

This post is really a rallying call to all those that feel the same as I do. I’m keen to start doing something about this, but where do we start?



*I use the term customer here to mean anyone who is a customer of a development team.

New team, new principles

The team I’m working with at the moment is at a formative stage and has come up with a set of principles to collectively aspire to:

Ship Something
Our overriding goal is to add value to the business as quickly and effectively as possible

Done
Our definition of done is when it is live and has been thoroughly tested

No hidden work
All work items should be tracked on the board

Unit Tests
All new or changed code should be thoroughly unit tested

Boy Scout rule
Leave everything in a better condition than you find it

Take risks
We are prepared to take risks with new technology and ideas

Be a tester
It’s everyone’s responsibility to make sure all work is thoroughly tested before being released

Some inconvenient truth

If you leave your PC and monitor on at night you’re using around 1,168 kilowatt hours (kWh) of electricity per year unnecessarily, which equates to 627 kg of CO2 (see below for my maths).

In the UK, one kWh of electricity costs around 11p, so leaving your PC on overnight costs around £129 a year. If you have, say, 50 employees, this means up to £6,450 ($10,500) per year is being spent on electricity you don’t use.

627 kg of CO2 is also the equivalent of flying from London to Barcelona and back twice and then doing the same trip by train three more times.

Is it really that inconvenient to turn your PC off at night?


The Maths

The average PC uses between 100 and 200 watts (W); the average monitor in sleep mode uses around 15 W.

So if we go somewhere in the middle (150 W for the PC + 15 W for the monitor = 175 W) we can work out:

175 W x 18 hours* = 3.15 kWh per day (round it up to 3.2)
3.2 kWh x 365 days** = 1,168 kilowatt hours (kWh) per year
1,168 kWh x 0.537*** = 627 kg CO2 per year

* not at work

** assuming you don’t even turn it off at the weekend

*** kg CO2 per unit of grid electricity: http://www.carbontrust.co.uk/resource/conversion_factors/default.htm

PC watts usage info from http://michaelbluejay.com/electricity/computers.html
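
If you want to plug in your own numbers, the whole calculation fits in a few lines of C# (the wattages, hours and conversion factors are just the assumptions above):

    using System;

    class WastedElectricity
    {
        static void Main()
        {
            // Assumptions from the maths above - adjust for your own kit.
            double watts = 150 + 15;      // PC plus monitor in sleep mode
            double hoursPerDay = 18;      // hours not at work
            double daysPerYear = 365;     // never switched off, even at weekends
            double kgCo2PerKwh = 0.537;   // UK grid electricity conversion factor
            double poundsPerKwh = 0.11;   // ~11p per kWh

            double kwhPerYear = watts * hoursPerDay * daysPerYear / 1000;

            // Note: the article rounds 3.15 kWh/day up to 3.2 before multiplying,
            // so its published figures come out slightly higher than these.
            Console.WriteLine(kwhPerYear.ToString("F0") + " kWh per year");
            Console.WriteLine((kwhPerYear * kgCo2PerKwh).ToString("F0") + " kg CO2 per year");
            Console.WriteLine("£" + (kwhPerYear * poundsPerKwh).ToString("F0") + " per year");
        }
    }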

Visualising the internal quality of software: Part 2

In part 1 I talked about how we’re using NDepend and NCover to generate highly visual reports for our projects.

These tools look great, but are of limited use without being able to analyse how changes to the code have affected the metrics. In this article I want to talk about a couple of ways we can use tools like NDepend, NCover and TeamCity to generate other visual reports to support our dashboards.

NDepend

[Image: VisualNDepend analysing the dashboards project]

VisualNDepend is a fantastic tool, but it takes time to learn and often requires considerable digging around to find what you’re looking for. A more immediate and visual alternative is the NDepend report (example), which can be generated through VisualNDepend or via NDependConsole. It contains a nice summary of assembly metrics and the results of CQL queries, such as methods with more than 10 lines of code. Importantly here, TeamCity can generate and display NDepend reports using NDependConsole, as Laurent Kempé explains here (using MSBuild, though it’s just as possible with NAnt or Rake).
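
As a flavour of those queries (written from memory, so treat it as a sketch and check the syntax against your NDepend version’s documentation), the “methods with more than 10 lines of code” constraint looks something like this in CQL:

    // Warn when any method grows beyond 10 lines of code.
    WARN IF Count > 0 IN SELECT METHODS WHERE NbLinesOfCode > 10 ORDER BY NbLinesOfCode DESC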

However I find even this report contains too much information, so we’ve modified the NDepend configuration file to show only four of the sections (see example): Assemblies Metrics, Assemblies Abstractness vs. Instability, Assemblies Dependencies Diagram and CQL Queries and Constraints. It’s now much easier to read. For example, the assembly metrics at the top show us some of the same metrics used in the dashboards, but broken down by assembly. When this is integrated into TeamCity you merely need to click back to see how any of them have changed since previous builds.

[Image: PreviousReport]

We can also see that the method “WriteChange(…)” clearly needs some love, being the top method to refactor. When you compare the two reports side by side it’s easy to see how, just like methods or classes with too many lines of code, too much information can make otherwise valuable reports unreadable. I have to admit it took me a long time to get into using NDepend well, and a lot of that is down to the overwhelming amount of information it produces.

NCover


It’s no good finding out your test coverage has gone down if you don’t know why. You could pay for an NCover licence for each developer, but it’s less costly to integrate the NCover report into TeamCity. Again, Laurent Kempé explains how to do this here, and here is an example of the NCover report for our Dashboards project. It doesn’t provide the same amount of detail as the NCover GUI, but it will at least give you a good head start in the right direction.

Tabs

So, in the end we have three tabs in our TeamCity project builds which, when used in conjunction with each other, give us a highly visual representation of how modifications are affecting the maintainability of our code. Of course there are many other reasons why code could be problematic, but the context these tools open up makes it much easier for developers to learn about, understand and therefore care more about the maintainability of their projects and the consequences of writing bad code.

Visualising the internal quality of software: Part 1

There are essentially two ways you can discuss the quality of software. External quality is something everyone can appreciate: for example, how easy it is for customers to use one of our products and whether they encounter any problems (bugs) whilst doing so. Internal quality, however, is a lot more complex and tricky to get right, but is just as important as external quality (if not more so). To me, internal quality is about how efficiently you can add or modify functionality without breaking any of the existing functionality, not just today but over a long period of time. This is also known as the cost of change.

Another well-known term is software entropy – the more you add to and change a piece of software, the more complex and difficult it becomes to do so. Eventually the cost of change simply becomes too high and the long and arduous task of replacing the system begins. Of course, doing this has a massive impact on competitiveness, as you’re unable to deliver new functionality until you’ve finished, which is why it’s so important to make the effort to keep your code in good condition.

In the new world of software development we’re all really keen on visualisation (or “information radiators” as we’re supposed to call them), and with good reason. It aids conversation and helps identify pain points so that, as a team, we can focus on removing them. A while ago a former colleague and friend of mine, Peter Camfield, blogged about the quality dashboards he and Josh Chisolm had been working on. We’ve recently implemented them at my current company as well. I’d like to go into more detail about the dashboards, some improvements I’ve made and how, in combination with other visualisation tools, we’re able to make the most of them.

[Image: The “traffic lights” for one of our applications]

The dashboards are created by running NDepend CQL queries* and NCover over the code and test assemblies for a project, aggregating the results, comparing them to previous recordings and outputting them as a set of traffic lights, with arrows to show whether each measurement has improved or worsened. The levels at which the traffic lights change colour are mostly based on recommendations from the NDepend Metrics Definition page.
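
To give a flavour of what the dashboards do under the hood, here’s a minimal sketch of turning a metric reading into a traffic light with a trend indicator (not our actual code – the metric, thresholds and values are invented for the example):

    using System;

    class Dashboard
    {
        // Illustrative thresholds, loosely in the spirit of NDepend's
        // recommended limits; lower is better for this metric.
        static string Colour(double value) =>
            value < 15 ? "Green" : value < 25 ? "Amber" : "Red";

        static string Trend(double current, double previous) =>
            current < previous ? "improving" :
            current > previous ? "worsening" : "unchanged";

        static void Main()
        {
            // e.g. percentage of methods with more than 10 lines of code.
            double previous = 18.2, current = 16.4;
            Console.WriteLine(Colour(current) + " (" + Trend(current, previous) + ")");
        }
    }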

We’ve been running them in TeamCity (and added the output as a tab) any time code is checked in. Having lived with these dashboards for a while now, I can say I’ve found them invaluable. They’ve raised countless discussions around how changes to the code base have impacted the metrics and really made us think about what good internal quality is. However, it’s also been frustrating, as until now they’ve only shown changes from one build to the next, so I’ve recently spent some time working on adding line charts (using the Google Chart API) to show how the metrics have changed over time:

[Image: Line charts for the same application as above]
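
Under the hood each chart is just an image URL built against the Google Chart API; a request along these lines (the data points here are invented) renders a simple line chart:

    http://chart.apis.google.com/chart?cht=lc&chs=400x150&chd=t:61,64,63,70,74&chds=0,100&chxt=x,y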

The line charts are configured so that “up is good”. They give an immediate (albeit subjective) view of whether the internal quality is improving. We’ve only had these for a few weeks and it will be really interesting to see how easy it is to get all the lines to go up, or at least remain stable**, and whether improvements in the metrics are reflected in the cost of change.

In part 2 I will talk about how we can use these reports in combination with other visualisations to help us understand how code modifications affect internal quality.

* It would take a long time to go into the detail of the CQL queries; suffice to say they were chosen to try to give the broadest picture of the condition of the code without having so many that they just become noise.
** From the small experience I’ve had so far, I don’t think it’s going to be very easy.

Come to XPDay 2009

The programme for this year’s XPDay conference in London on 7th & 8th December has been published and it looks like it’s going to be a tub-thumper. As well as lots of interesting programmed sessions (with an emphasis on experience reports and technical/programming topics), there are not two but three keynotes this year, including Doron Swade on what we can learn from Charles Babbage and a storyteller called Terry Saunders, who probably wonders what the heck he’s doing there but will no doubt provide a welcome diversion from the usual topics. Also, after the success of last year’s open space, the second day is pretty much wholly set aside for open space sessions.

On top of all that there’ll also no doubt be lots of entertaining conversation to be had after hours on both evenings in nearby pubs.

Tickets are on sale for £350 for both days, which makes it by far the cheapest conference I’m aware of (as well as the best ;-)).

I will also be involved in a session, “Introducing Lean and Agile Practices to a Chaotic Environment” where, along with some of my colleagues (not just developers!), we’ll be discussing how our practices have evolved over the last 12 months. Hope to see you there.

6 Thinking Hats Retrospective Plan

I’ve done this one a couple of times now and had positive feedback both times. It’s a good alternative to the shuffling-cards-around style of retrospective as it mostly involves talking (albeit in a controlled manner).

You can read about De Bono’s 6 Thinking Hats on Wikipedia where it is described as: “a thinking tool for group discussion and individual thinking. Combined with the idea of parallel thinking which is associated with it, it provides a means for groups to think together more effectively, and a means to plan thinking processes in a detailed and cohesive way”.

Use

The description above sums it up and, as I said, it’s a good alternative format to more familiar plans.

Length of time

Approximately one hour but can be tailored to your needs

Short Description

The team discuss the previous iteration whilst all wearing one of De Bono’s “hats”. They then do the same wearing another hat, and so on until all the hats have been worn. The hats relate to particular ways of thinking and force the group to collectively think and discuss in a particular way. The facilitator documents any output on a whiteboard. The output from the last hat (Red) is converted into actions.

Materials

A large whiteboard, 6 coloured cards (one for each hat) and a room with space to arrange chairs in a circle (no table).

Process

Preparation

Arrange chairs in a circle so all the participants are facing each other. Put the coloured cards along the top of the whiteboard in order of hat wearing (see below). Be familiar with all the “hats”.

Introduction

Once everyone is seated, introduce the exercise by giving a brief summary of De Bono’s Six Thinking Hats process. Then explain that the group will all put on the same hat and discuss the iteration (what went well, what didn’t go so well, what can we do to improve things) for 10 minutes, and after that they will put on the next hat in the series, and so on until all the hats have been worn.

Very Important: If at any time anyone starts talking in a manner not appropriate for the current hat, interrupt the discussion and say something like: “That’s great Black Hat thinking, but we’re not wearing that hat right now. Remember, we’re wearing our Green Hats, which are about alternatives and learning, so please try to discuss the subject in this manner”.

Tip: The facilitator should try to stay out of the circle and avoid the participants talking directly to them. This is tricky, as people have a habit of watching what you’re writing on the board. Try to block the board so they’re not distracted.

Order of hats

According to Wikipedia the order of hats most suited to process improvement is Blue, White, White (other people’s views), Yellow, Black, Green, Red, Blue, but for this exercise we will use:

Blue, White, Yellow, Black, Green, Red

Blue Hat (5 minutes)

Use the blue hat to discuss the objectives for the session and write the output on the whiteboard.

White Hat (10 minutes)

Participants raise and discuss anything from the last iteration which can be said to be a fact or information. Hunches, feelings and any discussion of reasons or other non-information-based output should be left for the appropriate hat.

Yellow Hat (10 minutes)

Participants can only talk about the good things that happened in the last iteration.

Black Hat (10 minutes)

Participants can only talk about the bad things that happened, any negative criticism they have or worst case scenarios they can think of.

Green Hat (10 minutes)

The discussion moves on to any ideas people have about solving problems or things that may add more value to the business or help in any way. Outside-of-the-box, helicopter-view, blue-sky thinking is encouraged.

Red Hat (5 Minutes)

Give the participants a short period in which they can come up to the board and write down 2 emotive statements each. These could be the issues that have stood out for them the most or an idea for solving a problem. The statements should be instinctive, which is why you give them very little time to do this.

Conclusion and Actions

Spend a little time as a group looking at the Red Hat output. Are there any themes? Do any of them relate to each other? Do any particularly stand out? From this, get the group to decide on a couple of actions to take away. As always, ensure the actions are very easy to execute (so nothing like “write more unit tests” or “refactor the database”, and more like “try to write tests first this iteration” or “arrange a meeting with the DBA to discuss a strategy for refactoring the database”).

How to initialise a class without calling the constructor (.Net)

Sometimes we want to test some really nasty legacy code but are inhibited by constructors taking tricky things like HttpWhatevers, God objects and so on – things you don’t care about but which would require enormous effort to set up just to get an instance of the damn thing so you can test your method.

One way out is to create a parameterless constructor on the class which is only used for testing. Not at all nice, but sometimes necessary to create that first seam.

A candidate I was pair interviewing with introduced me to something which may prove preferable in these cases – the Microsoft serialization library has a method which will initialize a class without calling the constructor:

FormatterServices.GetSafeUninitializedObject
http://msdn.microsoft.com/en-us/library/system.runtime.serialization.formatterservices.getsafeuninitializedobject.aspx
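
For example (a sketch – LegacyOrderProcessor and its constructor arguments are made up to stand in for your legacy code):

    using System;
    using System.Runtime.Serialization;

    // Hypothetical legacy class: the constructor needs things we can't
    // easily build in a test, but the method under test doesn't use them.
    public class LegacyOrderProcessor
    {
        public LegacyOrderProcessor(object httpContext, object godObject)
        {
            throw new InvalidOperationException("Needs a real web request!");
        }

        public decimal CalculateDiscount(decimal orderTotal) =>
            orderTotal > 100m ? orderTotal * 0.1m : 0m;
    }

    class Program
    {
        static void Main()
        {
            // Creates an instance without ever running the constructor;
            // all fields are left at their default (null/zero) values.
            var processor = (LegacyOrderProcessor)FormatterServices
                .GetSafeUninitializedObject(typeof(LegacyOrderProcessor));

            Console.WriteLine(processor.CalculateDiscount(150m)); // prints 15.0
        }
    }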

This way you don’t have to modify the code!

I would only ever advise using this if your only other sensible option would be to override the constructor. Hopefully once you have your tests you would be able to confidently refactor out the problematic code.

The same principles apply

The most obvious refactoring analogy I can think of is communal areas, such as the kitchen of a shared flat. It’s everyone’s responsibility to keep it clean, but it often quickly gets in a mess because people don’t bother to clean up after themselves. Sure, the cycle time to getting a meal may be quick, but after a while the kitchen becomes unusable. Finally a huge amount of effort has to be put into cleaning it, as some of the dirt, such as on the cooker, is really caked in by then. Other things are beyond cleaning and have to be thrown away altogether.

Yesterday I spent a few minutes tidying the bookshelf at work. There was stuff on the shelves which shouldn’t have been there, such as screws and mobile phone chargers (commented-out/redundant code), planning stuff spread across multiple shelves and mixed in with books (poor cohesion), and various colours and sizes of index cards in big unsorted piles (obfuscated, unreadable code).

The same principles apply – leave it in a better condition than you found it. Be considerate of your colleagues and everyone benefits.