Some inconvenient truth

If you leave your PC and monitor on at night you’re using around 1,168 kilowatt-hours (kWh) of electricity per year unnecessarily, which equates to 627 kg of CO2 (see below for my maths).

In the UK, one kWh of electricity costs around 11p, so leaving your PC on overnight costs roughly £129 a year. If you have, say, 50 employees, this means up to £6,450 ($10,500) per year is being spent on electricity you don’t use.

627 kg of CO2 is also the equivalent of flying from London to Barcelona and back twice and then making the same return trip by train three more times.

Is it really that inconvenient to turn your PC off at night?


The Maths

The average PC draws between 100 and 200 watts. The average monitor in sleep mode draws around another 15 watts.

So if we go somewhere in the middle and round up a little (150W for the PC plus 15W for the monitor, call it 175W) we can work out:

175W x 18 hours* = 3.2 kilowatt-hours (kWh) per day
3.2 kWh x 365 days** = 1,168 kWh per year
1,168 kWh x 0.537*** = 627 kg CO2 per year

* not at work

** assuming you don’t even turn it off at the weekend

*** kg CO2 per kWh of grid electricity: http://www.carbontrust.co.uk/resource/conversion_factors/default.htm

PC power usage figures from http://michaelbluejay.com/electricity/computers.html
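If you want to check the sums or plug in your own figures, here’s a minimal C# sketch of the same calculation. The wattage, hours, tariff and carbon factor are simply the assumptions stated above, nothing more.

```csharp
// A quick sketch of the sums above, using the same assumptions as this post.
class EnergyWaste
{
    static void Main()
    {
        const double watts = 175;           // ~150W for the PC + 15W for a sleeping monitor, rounded up
        const double hoursIdlePerDay = 18;  // hours not at work
        const double daysPerYear = 365;     // never switched off, even at weekends
        const double kgCo2PerKwh = 0.537;   // Carbon Trust grid electricity conversion factor
        const double poundsPerKwh = 0.11;   // approximate UK electricity price

        double kwhPerYear = watts * hoursIdlePerDay * daysPerYear / 1000;

        // The post rounds the daily figure up to 3.2 kWh before multiplying,
        // hence its totals come out slightly higher than these.
        System.Console.WriteLine("kWh per year:    {0:F0}", kwhPerYear);                 // ~1150
        System.Console.WriteLine("kg CO2 per year: {0:F0}", kwhPerYear * kgCo2PerKwh);   // ~617
        System.Console.WriteLine("Cost per year:   £{0:F0}", kwhPerYear * poundsPerKwh); // ~£126
    }
}
```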

Visualising the internal quality of software: Part 2

In part 1 I talked about how we’re using NDepend and NCover to generate highly visual reports for our projects.

These tools look great, but are of limited use without being able to analyse how changes to the code have affected the metrics. In this article I want to talk about a couple of ways we can use tools like NDepend, NCover and TeamCity to generate other visual reports to support our dashboards.

NDepend


VisualNDepend analysing the dashboards project

VisualNDepend is a fantastic tool, but it takes time to learn and often requires considerable digging around to find what you’re looking for. A more immediate and visual tool is the NDepend report (example), which can be generated through VisualNDepend or via the NDependConsole. It contains a nice summary of assembly metrics and the results of CQL queries, such as methods with more than 10 lines of code. Importantly here, TeamCity can generate and display NDepend reports using NDependConsole, as Laurent Kempé explains here (using MSBuild, though it’s just as possible with NAnt or Rake).

However I find even this report contains too much information, so we’ve modified the NDepend configuration file to show only four of the sections (see example): Assemblies Metrics, Assemblies Abstractness vs. Instability, Assemblies Dependencies Diagram, and CQL Queries and Constraints. It’s now much easier to read. For example, the assembly metrics at the top show some of the same metrics used in the dashboards, but broken down by assembly. When this is integrated into TeamCity you only need to click back through previous builds to see how any of them have changed.


We can also see that the method “WriteChange(…)” clearly needs some love, being the top method to refactor. When you compare the two reports side by side it’s easy to see how, just like methods or classes with too many lines of code, too much information can make otherwise valuable reports unreadable. I have to admit it took me a long time to get into using NDepend, and a lot of that is down to the overwhelming amount of information it produces.

NCover


It’s no good finding out your test coverage has gone down if you don’t know why. You could pay for an NCover licence for each developer, but less costly is to integrate the NCover report into TeamCity. Again, Laurent Kempé explains how to do this here, and here is an example of the NCover report for our Dashboards project. It doesn’t provide the same amount of detail as the NCover GUI, but it will at least give you a good head start in the right direction.

Tabs

So, in the end we have three tabs in our TeamCity project builds which, when used in conjunction with each other, give us a highly visual representation of how modifications are affecting the maintainability of our code. Of course there are many other reasons why code could be problematic, but the context these tools open up makes it much easier for developers to learn and understand, and therefore to care more about the maintainability of their projects and the consequences of writing bad code.

Visualising the internal quality of software: Part 1

There are essentially two ways you can discuss the quality of software. External quality is something everyone can appreciate. For example, how easy it is for customers to use one of our products and whether they encounter any problems (bugs) whilst doing so. Internal quality, however, is a lot more complex and tricky to get right, but is just as important as external quality (if not more so). To me, internal quality is about how efficiently you can add or modify functionality whilst not breaking any of the existing functionality, not just today but over a long period of time. This is also known as the cost of change.

Another well known term is software entropy – the more you add to and change a piece of software, the more complex and difficult it becomes to do so. Eventually the cost of change simply becomes too high and the long and arduous task of replacing the system begins. Of course, doing this has a massive impact on competitiveness, as you’re unable to deliver new functionality until you’ve done so, which is why it’s so important to make the effort to keep your code in good condition.

In the new world of software development we’re all really keen on visualisation (or “information radiators” as we’re supposed to call them) and with good reason. It aids conversation and helps identify any pain points so, as a team, we can focus on removing them. A while ago a former colleague and friend of mine, Peter Camfield, blogged about quality dashboards he and Josh Chisolm had been working on. We’ve recently implemented them at my current company as well. I’d like to go into more detail about the dashboards, some improvements I’ve made and how, in combination with other visualisation tools, we’re able to make the most of them.


The "traffic lights" for one of our applications

The dashboards are created by running NDepend CQL queries* and NCover on the code and test assemblies for a project, aggregating the results, comparing them to previous recordings and outputting them as a set of traffic lights with arrows to show whether each measurement has improved or worsened. The levels at which the traffic lights change colour are mostly based on recommendations from the NDepend Metrics Definition page.
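To give a flavour of what that aggregation step involves, here’s a hedged C# sketch of how a single metric reading might be turned into a traffic light and trend arrow. The thresholds, names and types are purely illustrative; this is not the actual dashboard code.

```csharp
using System;

// Hypothetical sketch: one metric reading becomes a traffic light plus a trend arrow.
enum Light { Green, Amber, Red }

class MetricLight
{
    // e.g. percentage of methods with more than 10 lines of code (illustrative thresholds)
    public static Light ToLight(double value, double amberThreshold, double redThreshold)
    {
        if (value >= redThreshold) return Light.Red;
        if (value >= amberThreshold) return Light.Amber;
        return Light.Green;
    }

    // Arrow shows whether the measurement has improved or worsened since the previous build.
    // Assumes a lower-is-better metric such as "% of long methods".
    public static string Trend(double current, double previous)
    {
        if (current < previous) return "↑ improved";
        if (current > previous) return "↓ worsened";
        return "→ unchanged";
    }

    static void Main()
    {
        double previous = 14.2, current = 12.8; // % of long methods, say
        Console.WriteLine("{0} ({1})", ToLight(current, 10, 20), Trend(current, previous));
    }
}
```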

We’ve been running them in TeamCity (and added the output as a tab) any time code is checked in. Having lived with these dashboards for a while now I can say I’ve found them invaluable. They’ve raised countless discussions around how changes to the code base have impacted the metrics and really made us think about what good internal quality is. However it’s also been frustrating that, until now, they’ve only shown changes from one build to the next, so I’ve recently spent some time working on adding line charts (using the Google Chart API) to show how the metrics have changed over time:


Line charts for the same application as above

The line charts are configured so that “up is good”. They give an immediate (albeit subjective) view on whether the internal quality is improving. We’ve only had these for a few weeks and it will be really interesting to see how easy it is to get all the lines to go up or at least remain stable** and whether improvements in the metrics are reflected in the cost of change.
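For the curious, here’s a rough C# sketch of the kind of URL building involved. The cht, chs, chtt and chd parameters are standard Google Chart API query parameters, but the helper, names and metric values here are made up for illustration rather than taken from our dashboard code.

```csharp
using System;
using System.Linq;

// Minimal sketch (not the actual dashboard code) of building a Google Chart API
// line-chart URL for a metric's history over recent builds.
class MetricChart
{
    public static string LineChartUrl(string title, double[] history)
    {
        // Simple text encoding expects values between 0 and 100, so normalise to the maximum.
        double max = history.Max();
        var scaled = history.Select(v => Math.Round(v / max * 100, 1));

        return "http://chart.apis.google.com/chart"
             + "?cht=lc"                               // line chart
             + "&chs=300x120"                          // size in pixels
             + "&chtt=" + Uri.EscapeDataString(title)  // chart title
             + "&chd=t:" + string.Join(",", scaled);   // data series
    }

    static void Main()
    {
        // e.g. test coverage over the last few builds (made-up numbers)
        Console.WriteLine(LineChartUrl("Test coverage", new[] { 61.0, 63.5, 64.2, 67.8 }));
    }
}
```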

In part 2 I will talk about how we can use these reports in combination with other visualisations to help us understand how code modifications affect internal quality.

* It would take a long time to go into the detail of the CQL queries; suffice to say they were chosen to give the broadest picture of the condition of the code without there being so many that they just become noise.
** From the small experience I’ve had so far I don’t think it’s going to be very easy.

Come to XPDay 2009

The programme for this year’s XPDay conference in London on 7th & 8th December has been published and it looks like it’s going to be a tub-thumper. As well as lots of interesting programmed sessions (with an emphasis on experience reports and technical/programming topics), there are not two but three keynotes this year, including Dorian Swade on what we can learn from Charles Babbage, and a storyteller called Terry Saunders, who probably wonders what the heck he’s doing there but will no doubt provide a welcome diversion from the usual topics. Also, after the success of last year’s open space, the second day is pretty much wholly set aside for open space sessions.

On top of all that there’ll also no doubt be lots of entertaining conversation to be had after hours on both evenings in nearby pubs.

Tickets are on sale at £350 for both days, which makes it by far the cheapest conference I’m aware of (as well as being the best ;-)).

I will also be involved in a session, “Introducing Lean and Agile Practices to a Chaotic Environment” where, along with some of my colleagues (not just developers!), we’ll be discussing how our practices have evolved over the last 12 months. Hope to see you there.

6 Thinking Hats Retrospective Plan

I’ve done this one a couple of times now and had positive feedback both times. It’s a good alternative to the shuffling-cards-around style of retrospective as it mostly involves talking (albeit in a controlled manner).

You can read about De Bono’s 6 Thinking Hats on Wikipedia where it is described as: “a thinking tool for group discussion and individual thinking. Combined with the idea of parallel thinking which is associated with it, it provides a means for groups to think together more effectively, and a means to plan thinking processes in a detailed and cohesive way”.

Use

The description above sums it up and, as I said, it’s a good alternative format to more familiar plans.

Length of time

Approximately one hour but can be tailored to your needs

Short Description

The team discuss the previous iteration whilst all wearing one of De Bono’s “hats”. They then do the same wearing another hat, and so on until all the hats have been worn. The hats relate to particular ways of thinking and force the group to collectively think and discuss in a particular way. The facilitator documents any output on a whiteboard. The output from the last hat (Red) is converted into actions.

Materials

A large whiteboard, 6 coloured cards (one for each hat) and a room with space to arrange chairs in a circle (no table).

Process

Preparation

Arrange chairs in a circle so all the participants are facing each other. Put the coloured cards along the top of the whiteboard in the order the hats will be worn (see below). Be familiar with all the “hats”.

Introduction

Once everyone is seated, introduce the exercise by giving a brief summary of De Bono’s Six Thinking Hats process. Then explain that the group will all put on the same hat and discuss the iteration (what went well, what didn’t go so well, what can we do to improve things) for 10 minutes, and that after that they will put on the next hat in the series, and so on until all the hats have been worn.

Very Important: If at any time anyone starts talking in a manner not appropriate for the current hat, interrupt the discussion and say something like: “That’s great Black Hat thinking, but we’re not wearing that hat right now. Remember, we’re wearing our Green Hats, which are about alternatives and learning, so please try to discuss the subject in this manner”.

Tip: The facilitator should try to stay out of the circle and try to avoid the participants talking directly to them. This is tricky as people have a habit of watching what you’re writing on the board. Try to block the board so they’re not distracted.

Order of hats

According to Wikipedia, the order of hats most suited to process improvement is Blue, White, White (other people’s views), Yellow, Black, Green, Red, Blue, but for this exercise we will use:

Blue, White, Yellow, Black, Green, Red

Blue Hat (5 minutes)

Use the blue hat to discuss the objectives for the session and write the output on the whiteboard.

White Hat (10 minutes)

Participants raise and discuss anything from the last iteration which can be said to be a fact or information. Hunches, feelings and any discussion of reasons or other non-factual output should be left for the appropriate hat.

Yellow Hat (10 minutes)

Participants can only talk about the good things that happened in the last iteration.

Black Hat (10 minutes)

Participants can only talk about the bad things that happened, any negative criticism they have or worst case scenarios they can think of.

Green Hat (10 minutes)

The discussion moves on to any ideas people have about solving problems or things that may add more value to the business or help in any way. Outside-of-the-box, helicopter-view, blue-sky thinking is encouraged.

Red Hat (5 Minutes)

Give the participants a short period of time in which they can come up to the board and write down two emotive statements each. These could be the issues that have stood out for them the most or an idea for solving a problem. These statements should be instinctive, which is why you give them very little time to do this.

Conclusion and Actions

Spend a little time as a group having a look at the Red Hat output. Are there any themes? Do any of them relate to each other? Do any particularly stand out? From this, get the group to decide on a couple of actions to take away. As always, ensure the actions are very easy to execute (so nothing like “write more unit tests” or “refactor the database” and more like “try to write tests first this iteration” or “arrange a meeting with the DBA to discuss a strategy for refactoring the database”).

How to initialise a class without calling the constructor (.Net)

Sometimes we want to test some really nasty legacy code but are inhibited by constructors taking tricky things like HttpWhatevers, God objects and so on, which we don’t care about but which would require enormous effort to set up just to get an instance of the damn thing so we can test our method.

One way out is to create a parameterless constructor on the class which is only used for testing. Not at all nice, but sometimes necessary to create that first seam.

A candidate I was pair interviewing with introduced me to something which may prove preferable in these cases – the Microsoft serialization library has a method which will initialize a class without calling the constructor:

FormatterServices.GetSafeUninitializedObject
http://msdn.microsoft.com/en-us/library/system.runtime.serialization.formatterservices.getsafeuninitializedobject.aspx

This way you don’t have to modify the code!
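As a rough illustration (the legacy class below is made up for the example), usage looks something like this:

```csharp
using System;
using System.Runtime.Serialization;

// Made-up legacy class: the constructor demands things we don't care about in a test.
public class LegacyOrderProcessor
{
    public LegacyOrderProcessor(object httpWhatever, object godObject)
    {
        if (httpWhatever == null || godObject == null)
            throw new ArgumentNullException(); // exactly the kind of thing that blocks testing
    }

    public decimal CalculateDiscount(decimal orderTotal)
    {
        return orderTotal > 100m ? orderTotal * 0.1m : 0m;
    }
}

class Example
{
    static void Main()
    {
        // Bypass the constructor entirely; no constructor code runs and any fields
        // are left at their default values, so only test members that don't rely on them.
        var processor = (LegacyOrderProcessor)FormatterServices
            .GetSafeUninitializedObject(typeof(LegacyOrderProcessor));

        Console.WriteLine(processor.CalculateDiscount(150m)); // 15
    }
}
```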

I would only ever advise using this if your only other sensible option would be to add a test-only constructor. Hopefully, once you have your tests in place, you’ll be able to confidently refactor out the problematic code.

The same principles apply

The most obvious refactoring analogy I can think of is communal areas such as the kitchen of a shared flat. It’s everyone’s responsibility to keep it clean, but it often quickly gets in a mess because people don’t bother to clean up after themselves. Sure, the cycle time to getting a meal may be quick, but after a while the kitchen becomes unusable. Finally a huge amount of effort has to be put into cleaning it, as some of the dirt, such as on the cooker, is really caked in by then. Other things are beyond cleaning and have to be thrown away altogether.

Yesterday I spent a few minutes tidying the bookshelf at work. There was stuff on the shelves which shouldn’t have been there, such as screws and mobile phone chargers (commented-out/redundant code), planning stuff spread across multiple shelves and mixed in with books (poor cohesion), and various colours and sizes of index cards in big unsorted piles (obfuscated, unreadable code).

The same principles apply – leave it in a better condition than you found it. Be considerate of your colleagues and everyone benefits.

We are hiring in London

If you’re interested in working for the second biggest music download provider in the world, with an Agile and learning culture, based in the coolest area of London (the Silicon Roundabout), we have openings for:

Quality Analysts/Testers

Developers

User Experience Developer

If you know me personally please get in touch, otherwise go here for more details:

http://www.7digital.com/business/careers

Please do not bother if you are a recruitment agency, seriously. You will be wasting your time.

Read books and earn more money

If I was going to offer one piece of advice to anyone aspiring to be a top class software developer* (apart from writing lots of code) it would be to read books. Not just any books though, books written by masters.

Experience often counts for little in software development. If you’ve spent your whole career in the same shop with little exposure to other languages or people outside your organisation, it’s quite possible that some 21-year-old upstart with a copy of Clean Code under his or her arm will wipe the floor with you when it comes to effectively writing and maintaining software.

Granted, working with good or even great developers will mean a lot rubs off on you. I’ve learnt countless lessons from the people I’ve worked with, but if I look around me, people are no older than 30 at most, with an average of around 5 years developing software in probably no more than 3 different organisations.

People like Martin Fowler, Erich Gamma, Kent Beck, Robert C Martin, Craig Larman and Michael Feathers have been at it for 25 years or more and in that time have slowly built up the kind of reputation you only get from regularly being right.

Also granted, blogs are an invaluable resource, but they are rarely more than a meme in someone’s head and give you nothing like the deep contextual insight you can get from a well written book. There is also little to assure you that the author is any more likely to have a better idea than you. Believe me when I say there are many people blogging who rarely live up to the practices they preach and are no more or less likely than you to know the right or wrong way of doing something.

I have learnt a lot from both colleagues and blogs, but both pale in comparison to what I’ve learnt from the books I’ve read. I can comfortably say there is no way I would be where I am today without them and I strongly believe reading them will earn you more money. When you think of all the things you could do to try and put more folding stuff in your back pocket, it’s a relatively simple win!

I’ve been inspired to write this after reading Eric Evans’ Domain-Driven Design, which has gone right into my top 5 books of all time. Why is it so good? It’s not because Eric was necessarily born with some supernatural instinct for writing great software, or because Domain-Driven Design is going to save the planet. It’s because it’s full of the lessons Eric has learnt in his long and illustrious career, carefully woven into a highly readable narrative. There’s nothing particularly new here. Like all the great books I’ve read, it is no more than a distillation of the practices in the industry which, through time, have proven to be the most effective. I remember Martin Fowler once saying that people often asked him what the future of software development would be. His answer was that to see the future you only have to look to the past.

Below is a list of the books which changed the way I think and work more than any others I’ve read. No doubt you’ve heard of them all already and they’re on a list of books to read in the back of your mind somewhere, and I’m sure there are plenty of other books that have had as significant an impact on other people as these have had on me. However, I think few would argue that any of these books don’t deserve their place on this list. All I can say is get on and read them if you haven’t already.

Refactoring: Improving the Design of Existing Code by Martin Fowler
Domain-Driven Design: Tackling Complexity in the Heart of Software by Eric Evans
Working Effectively with Legacy Code by Michael Feathers
Agile Software Development: Principles, Patterns, and Practices by Robert C Martin
Clean Code: A Handbook of Agile Software Craftsmanship by Robert C Martin
xUnit Test Patterns: Refactoring Test Code by Gerard Meszaros
Lean Software Development by Mary and Tom Poppendieck
The Toyota Way: 14 Management Principles from the World’s Greatest Manufacturer by Jeffrey Liker

*I am in no way professing to be a top class developer 🙂

The roles and responsibilities of an agile Software Team

We’re currently going through a highly transformational period at work, introducing many new concepts intended to better manage the flow of work and the long-term sustainability of the organisation. It became clear there was a need for some means of guiding people as to what was expected of them, and the previous job descriptions were no longer appropriate. This article contains the new responsibilities and the justification for them, which I thought might prove useful to others.

Firstly, credit to Portia Tung and Pascal Van Cauwenberghe as their article on the Roles and Responsibilities of an Agile Team inspired much of what follows.

Roles

Disclaimer: These are quite generic descriptions which I hope may prove useful if you are required to formulate such things, but they are still specific to our needs. We don’t have business analysts, project managers or architects, for example (and currently don’t feel the need for them either), so if you feel there’s something missing compared with where you work, that’s probably because there is.

Team Member
Developer / Tester
Lead Developer
Team Leader
Principal Developer

Some more details

  • Everyone is a Team Member first and foremost. It was interesting when drawing up these job descriptions how almost all the responsibilities are applicable to a Team Member with the other roles (in most cases) adding little more than extensions to the same responsibilities.
  • Some of the roles aren’t positions in themselves. For example, the Team Leader role does not map directly to a project manager (we don’t have any) – it is held by whoever is most appropriate (it could be the Lead Developer but it could be a tester or developer).
  • Each responsibility follows the format of the explanation of the responsibility (in bold) followed by the justification e.g.

To ensure deliverables meet the acceptance criteria given so that we do not waste time reworking them.

I think this is really important. I’ve rarely ever seen a job description which justified itself. It’s like a User story with the “so that…” part missing.

Objective

The overriding objective was to provide a description of the desired environment within which self-organisation and self-empowerment are the preferred management approach. This can be directly related back to two principles in the Agile Manifesto:

The best architectures, requirements, and designs
emerge from self-organizing teams.

Build projects around motivated individuals.
Give them the environment and support they need,
and trust them to get the job done.

“Respect for People” is also one of the two main pillars of the Toyota Way (the other being continuous improvement). In my mind there is no better way to grow people within your organisation than empowering them and respecting and trusting them to do their job (interestingly, it appears this pillar is commonly overlooked, resulting in the “Toyota Half-Way”).