
A call to action to UK software developers to stop your money being wasted

As I write, the petition demanding the UK government reviews its outdated I.T. project processes has been going for a week and has 193 signatures.

Am I disappointed? Far from it. The signatories list reads like a who’s who of people in the UK who care about software. Among the countless thought leaders, authors, conference speakers and influential bloggers, the signatories include:

  • Steve Freeman, winner of the Gordon Pask award and author
  • Rachel Davies, director of the Agile Alliance and author
  • Karl Scotland, founder member of the Lean Software and Systems Consortium
  • Mike Hill, conference chair for SPA
  • Giovanni Asproni, conference chair for ACCU
  • Keith Braithwaite, conference chair for XPDay

Let’s face it, this is never going to be a populist campaign. The simple fact is that no one else is going to tell our government that there’s a more effective way to manage software*. It’s certainly not in the interests of companies like BT and Siemens to stop signing multi-million pound contracts, and I’ve no doubt very few of the people advising the government on I.T. strategy are aware of, or interested in, the now well-established and highly successful Agile umbrella of ways to build effective software on time and on budget.

So it’s up to us to highlight this situation. No one else is going to do it for us. As a member of the software development community in the UK, your money is not going towards hospitals or schools; it is being wasted on failed I.T. projects, and it will continue to be wasted** until our government stops naively signing off massive contracts for hugely optimistic and unrealistic projects.

I’m calling on everyone involved in software development in this country to do more to try and raise the issue to the level of visibility it deserves:

  1. Sign the petition if you haven’t done so already
  2. Write to your MP informing them about the petition and personally demanding a review. I’ve written a sample letter here which you can use as a template, but you should try to use your own words as much as possible, otherwise it’s likely they will ignore it. Some more tips are available here.
  3. Blog about the petition. Tell people to sign it and email their MP and blog about it too. Twitter is great but blogging is better.

Nothing is going to change unless you get involved and demand your hard-earned cash is better spent.



* An open letter about the problems with the NHS I.T. project was written to the Government by a group of academics, but it was woefully misguided, asking to see, among other things, documents showing the “detailed design” and “technical architecture” of what must be the most idealistically naive and over-ambitious software project ever undertaken.

** According to IT Jobs Watch the average salary for a developer in the UK is £37,000, which means that, on average, they will contribute around £6,000 per year in tax. We’ll also say, for the sake of argument, that our average Joe works 40 years in his/her lifetime, so in total he/she will pay £240,000 in tax. It would take over 100,000 developer lifetimes to accrue the estimated £26 billion that has been wasted so far on failed Government I.T. projects.

Sign my petition to the PM to demand the UK Govt. reviews its failed IT processes

The Independent newspaper recently reported that the UK government has wasted £26 billion of its taxpayers’ money on IT projects which have “run millions of pounds over budget or have been cancelled altogether”.

£26 billion!

Of the projects mentioned in the article, I’ve been following the massive NHS balls-up for some time, and it’s quite clear that most of the reasons it has failed so badly can be attributed to following Waterfall-style project management processes, dooming these behemoths to failure from the outset. The only value these projects are delivering is lining the pockets of companies like BT and Fujitsu, who’ve landed most of the contracts for this work. There’s no doubt that most of the other cash sinkholes mentioned suffer from the same problems* (it was the UK government that created the PRINCE2 project management method, after all).

I find this totally unacceptable, especially when we know there are now well-established alternative approaches to managing these projects which would likely have saved the taxpayer an absolute fortune.

Keith Braithwaite has also blogged his views about the Government IT failures here.

I’ve created a petition on the Number10 website asking the Prime Minister to demand a review of the out-of-date manner in which government IT projects are undertaken. I urge you to sign it and tweet, blog and Facebook it to everyone you know**:

http://petitions.number10.gov.uk/ITProcessReview/

Thank you.

Update: If you don’t think it’s worth signing because it will have no impact, have a look at this article on the BBC about the UK Government’s support for IE6.


* Ironically, the National Audit Office are currently producing a (delayed) report on the NHS IT project. They still advocate the Waterfall approach, according to this document found here. XtC members are in the process of writing to them about this; that letter can be found here.

** UK residents only. I’ve been involved in these petitions in the past and, if they get enough support, they do get responded to.


Less interface intensive dependency injection (C#)

In a study session at work last week we discussed how, with dependency injection, you can end up with loads of anaemic interfaces. Arguably these interfaces provide little value. In C#, at least, there’s an alternative (thanks to Josh, and I think also Joe Campbell, for this idea): instead of taking an interface for a dependency in the client’s constructor, take a delegate.

So, instead of this “poor man’s” dependency injection example from my last post:

public interface IUserPaymentRepository
{
     void MakeTransaction(Price amount);
}

public class TrackPaymentActivity
{
     private IUserPaymentRepository _userPaymentRepository;

     public TrackPaymentActivity() : this(new UserPaymentRepository())
     {
     }

     public TrackPaymentActivity(IUserPaymentRepository userPaymentRepository)
     {
          this._userPaymentRepository = userPaymentRepository;
     }

     public void AttemptToPayForTrack()
     {
          // ...
          _userPaymentRepository.MakeTransaction(trackPrice);
          // ...
     }
}

you can do this:

public class TrackPaymentActivity
{
     private Action<Price> _makeTransaction;

     public TrackPaymentActivity(Action<Price> makeTransaction)
     {
          this._makeTransaction = makeTransaction;
     }

     public void AttemptToPayForTrack()
     {
          // ...
          _makeTransaction(trackPrice);
          // ...
     }
}

So how do you test this? Mocking frameworks don’t (and probably couldn’t) support delegates, so you’ll need to create an interface with a method matching the signature of the delegate, but only for testing purposes:

internal interface ITestPaymentTransaction
{
     void MakeTransaction(Price amount);
}

[Test]
public void Should_Take_Correct_Payment_Amount_For_Track_From_User()
{
     ITestPaymentTransaction mockedTransaction =
               MockRepository.GenerateMock<ITestPaymentTransaction>();

     new TrackPaymentActivity(mockedTransaction.MakeTransaction)
               .AttemptToPayForTrack();

     mockedTransaction.AssertWasCalled(transaction => transaction.MakeTransaction(expectedAmount));
}

In most situations I think this is preferable. You still have to create an interface, but it’s not creating noise in your production code. It also means you can’t use an IoC container, but as I said in my last post, in many situations you probably don’t need one anyway.
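For what it’s worth, the production-side wiring is then just a method group conversion – a quick sketch, reusing the UserPaymentRepository class from the first example above:

// Production code: pass the repository method directly as the delegate
var repository = new UserPaymentRepository();
var trackPaymentActivity = new TrackPaymentActivity(repository.MakeTransaction);
trackPaymentActivity.AttemptToPayForTrack();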

Not so poor man’s dependency injection

Uncle Bob’s written a post explaining why he tries to be very sparing with the use of IoC containers. I couldn’t agree more. Overuse of IoC containers can result in a huge amount of indirection and noise compared to the value they provide*.

In my mind, the main benefit of dependency injection is testability, and it’s crucial to being able to do TDD. If you do TDD you want to test one thing at a time (one logical assertion per test). If you find you need another piece of logic which is not the responsibility of the object you’re testing, you don’t carry on writing reams of code until your test passes; you create a stub for that piece of functionality, make your test pass and only then consider implementing the stubbed-out code.

I’ve always been a fan of poor man’s dependency injection. I can’t find any good examples on t’web so I’ll give one (in C#). Poor man’s DI basically means having a default parameterless constructor and an overloaded one which takes the dependencies; the default constructor calls the overloaded one with concrete instances of those dependencies:

public interface IUserPaymentRepository
{
     void MakeTransaction(Price amount);
}

public class TrackPaymentActivity
{
     private IUserPaymentRepository _userPaymentRepository;

     public TrackPaymentActivity() : this(new UserPaymentRepository())
     {
     }

     public TrackPaymentActivity(IUserPaymentRepository userPaymentRepository)
     {
          this._userPaymentRepository = userPaymentRepository;
     }

     public void AttemptToPayForTrack()
     {
          // ...
          _userPaymentRepository.MakeTransaction(trackPrice);
          // ...
     }
}

This allows you to call the parameterless constructor in your production code, but the overloaded one from your tests. I guess this is where it gets its bad name: test hooks are pretty much frowned upon, and there’s no denying that’s what’s going on here.
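To make that concrete, here’s roughly what a test looks like against the example above (a sketch using Rhino Mocks; expectedAmount is assumed to be set up elsewhere in the fixture):

[Test]
public void Should_Take_Correct_Payment_Amount_For_Track_From_User()
{
     IUserPaymentRepository mockRepository =
               MockRepository.GenerateMock<IUserPaymentRepository>();

     // The test uses the overloaded constructor; production code uses the default one
     new TrackPaymentActivity(mockRepository).AttemptToPayForTrack();

     // expectedAmount is assumed to be defined elsewhere in the fixture
     mockRepository.AssertWasCalled(repository => repository.MakeTransaction(expectedAmount));
}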

Here’s the thing: in many of the situations where you need to inject a dependency it’s for one piece of behaviour, and there’s no decision to be made about which object provides that behaviour – you know there’s only one place in your code base that has that functionality. In other words, the dependency is still there! It’s just that by using an IoC container you’ve made it magically look decoupled. With poor man’s DI it’s easy to see these dependencies, but if you’ve palmed it all off to an IoC container you’re gonna end up having no idea what’s using what until you fire up the good ol’ debugger. What’s worse, your code metrics won’t pick it up, giving you the impression your code is in better condition than it actually is.

Of course there is a time and a place for IoC containers; it’s just that it’s probably a lot rarer than you thought. If there’s a decision to be made at runtime about which type implementing IUserPaymentRepository should be used, or there’s more than one member on the interface (suggesting that it is stateful), then an IoC container would be desirable. Otherwise I’m often quite happy being poor.
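To illustrate the kind of runtime decision I mean (a hypothetical sketch – the two repository types here are made up):

// Hypothetical: which implementation to use genuinely varies per request,
// so wiring it up centrally (e.g. in an IoC container) starts to earn its keep.
// 'user' is assumed to come from the surrounding context.
IUserPaymentRepository repository = user.HasCreditAccount
     ? (IUserPaymentRepository)new CreditAccountPaymentRepository()
     : new CardPaymentRepository();

var trackPaymentActivity = new TrackPaymentActivity(repository);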


* I actually don’t have a huge amount of experience with IoC containers, but that’s because, like Uncle Bob, I’ve always tried to avoid them until absolutely necessary. However, I do have first-hand reports (within organisations I’ve worked for) of IoC abuse causing very expensive problems.

Understanding chaos and what it means to software development

Tim Ross pointed me in the direction of a truly mind-blowing documentary on the BBC iPlayer about the nature of chaos, its fundamental role in the universe and how it explains the behaviour of complex systems, such as the way birds fly in flocks and how we evolve. If you’re in the UK I’d highly recommend watching it before it gets taken down next Sunday (24th Jan 2010). It’s called The Secret Life of Chaos.

What, you may ask, has this got to do with software development? Everything, actually. If you recognise that you’re working within a complex system, then you must accept that the results will be totally unpredictable (that’s the chaos element), because the laws of nature say it will be so. Instead of trying to force it to be predictable (e.g. long-term planning, estimation-based planning, etc.) you allow it to behave like a complex system, which comes down to a few simple ingredients (there’s a toy code sketch after the list):
  • self-organisation, e.g. a flock of birds organises itself into the most appropriate formation; no one tells them how to do it – there’s no “head bird” orchestrating things.
  • simple rules, e.g. animals are impelled to mate with each other, which results in evolution.
  • feedback, e.g. mating results in offspring which, if they are successful within their system, also mate and produce offspring, resulting in animals more suited to their system.
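As promised, here’s a toy sketch (mine, not from the documentary) of how little it takes to get flock-like self-organisation out of simple rules plus feedback – each bird adjusts its velocity using only the state of the flock around it:

using System.Collections.Generic;
using System.Linq;

public class Bird
{
     public double X, Y, Vx, Vy;
}

public static class Flock
{
     // One tick of the simulation: two simple local rules per bird
     public static void Step(IList<Bird> birds)
     {
          foreach (var bird in birds)
          {
               // Rule 1 (cohesion): drift towards the flock's average position
               bird.Vx += (birds.Average(b => b.X) - bird.X) * 0.01;
               bird.Vy += (birds.Average(b => b.Y) - bird.Y) * 0.01;

               // Rule 2 (alignment): match the flock's average velocity
               bird.Vx += (birds.Average(b => b.Vx) - bird.Vx) * 0.1;
               bird.Vy += (birds.Average(b => b.Vy) - bird.Vy) * 0.1;
          }

          // Feedback: every move changes the averages the next tick reacts to
          foreach (var bird in birds)
          {
               bird.X += bird.Vx;
               bird.Y += bird.Vy;
          }
     }
}

No bird is in charge; run Step in a loop and ordered formations emerge from nothing but these local rules.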

There are a lot of people within software development starting to talk about how we can harness complexity science to create better organisations and software.

Something in Agile needs fixing

At the recent XPDay 2009 conference in London I organised an open space session under the title “Agile isn’t solving our customers’ problems because they’re not here”. It was driven by my feeling that whilst Agile, when done well, is improving the reputation of software development, the impact it’s having is relatively minor. My biggest takeaway from the session is that nothing in our Agile toolkit really addresses the needs of our customers*.

In the short time I’ve been involved with the community I’ve seen almost no discussions or articles involving any contribution from people outside of the IT department, and none of the methodologies/name clouds I can think of appears to have been developed or evolved with any collaboration from the people driving the work.

I don’t mean to disparage all the hard work and enthusiasm people (myself included) have put into trying to make things better; I just think the fact that we’ve left the most important people out of our discussions has caused us a lot of unnecessary pain. Agile adoption is never a smooth affair, especially the process of “convincing” people outside of the development department. If we involved our customers more in the process of forming our principles, tools and methodologies we would surely get where we all want to be a lot more quickly. Remember: people don’t resist change, they resist being changed.

Here are some examples of what I mean:

  • Speaking to my CEO in the pub the other day, he said he often finds it “patronising” when we try to impress on him the importance of the things we’re trying to do.
  • When I first started with Scrum I constantly came up against resistance and cynicism when trying to encourage customers to participate in traditional Scrum meetings such as retrospectives, planning and stand-ups. Nothing Scrum teaches prepares you for this (I’ve done both the CSM and the Estimating and Planning courses, and read a lot besides).
  • The terminology we use (e.g. Scrum, Sprints, Stories, Kanban, eXtreme Programming) is totally immature in some people’s eyes. One useful takeaway from the session was to stop using inappropriate terminology in front of customers.

I have an enormous amount of respect for our CEO and others within our organisation for putting the trust they do in us, even though they (understandably) find so much of what we do bewildering and irritating. In my experience you would be very lucky to be able to work with someone who is prepared to take that kind of risk – in most places I’ve worked you just don’t stand a chance.

For me, the flaw lies deep within Agile. It was never designed to address the needs of customers. XP and Scrum were designed for fixing dysfunctional environments. The terminology was designed to appeal to developers. When most of the Agile principles and methodologies were developed the need was different, and they don’t appear to have evolved. New methodologies suffer the same problem – why does there have to be an introductory guide to Kanban for managers? If we have to try to sell this stuff to people, I think we’ve already lost most of the battle.

In the last 15 years or so most of the “microeconomic” problems with software development have been solved. The majority of people writing software may still not be doing it well, but the answers are there if you care to look. The big problems are still out there, though, and as a community we need to start addressing them – and I think the only way we can do that is by getting our customers more involved in the debate.

This post is really a rallying call to all those that feel the same as I do. I’m keen to start doing something about this, but where do we start?



* I use the term “customer” here to mean anyone who is a customer of a development team.

New team, new principles

The team I’m working with at the moment is at a formative stage and has come up with a set of principles to collectively aspire to:

Ship Something
Our overriding goal is to add value to the business as quickly and effectively as possible

Done
Our definition of done is when it is live and has been thoroughly tested

No hidden work
All work items should be tracked on the board

Unit Tests
All new or changed code should be thoroughly unit tested

Boy Scout rule
Leave everything in a better condition than you find it

Take risks
We are prepared to take risks with new technology and ideas

Be a tester
It’s everyone’s responsibility to make sure all work is thoroughly tested before being released

Some inconvenient truth

If you leave your PC and monitor on at night you’re using up around 1,168 kilowatt-hours (kWh) of electricity per year unnecessarily, which is 627 kg of CO2 (see below for my maths).

In the UK, one kWh of electricity costs around 11p, so leaving your PC on overnight costs around £129 a year. If you have, say, 50 employees, this means up to £6,450 ($10,500) per year is being spent on electricity you don’t use.

627 kg of CO2 is also the equivalent of flying from London to Barcelona and back twice, then doing the same trip by train three more times.

Is it really that inconvenient to turn your PC off at night?


The Maths

The average PC draws between 100 and 200 watts (W); the average monitor in sleep mode draws around 15 W.

So if we take a figure towards the middle of that range for the PC, plus the monitor – call it 175 W in total – we can work out:

175 W × 18 hours* = 3.2 kWh per day
3.2 kWh × 365 days** = 1,168 kWh per year
1,168 kWh × 0.537*** = 627 kg CO2 per year

* not at work

** assuming you don’t even turn it off at the weekend

*** kg CO2 per kWh of grid electricity: http://www.carbontrust.co.uk/resource/conversion_factors/default.htm

PC wattage figures from http://michaelbluejay.com/electricity/computers.html
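If you’d rather not trust my mental arithmetic, the whole calculation fits in a few lines of C# (the constants are just the assumptions above; computing it without the intermediate rounding gives slightly lower figures – about 1,150 kWh, 617 kg and £126):

using System;

class PcPowerWaste
{
     static void Main()
     {
          const double watts = 175;          // PC plus monitor in sleep mode
          const double hoursPerDay = 18;     // hours per day not at work
          const double kgCo2PerKwh = 0.537;  // Carbon Trust grid electricity factor
          const double poundsPerKwh = 0.11;  // approximate UK price per kWh

          double kwhPerYear = watts * hoursPerDay * 365 / 1000;

          Console.WriteLine("kWh per year:    {0:F0}", kwhPerYear);
          Console.WriteLine("kg CO2 per year: {0:F0}", kwhPerYear * kgCo2PerKwh);
          Console.WriteLine("Cost per year:   £{0:F0}", kwhPerYear * poundsPerKwh);
     }
}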

Visualising the internal quality of software: Part 2

In part 1 I talked about how we’re using NDepend and NCover to generate highly visual reports for our projects.

These tools look great, but are of limited use without being able to analyse how changes to the code have affected the metrics. In this article I want to talk about a couple of ways we can use tools like NDepend, NCover and TeamCity to generate other visual reports to support our dashboards.

NDepend

VisualNDepend analysing the dashboards project

VisualNDepend is a fantastic tool, but it takes time to learn and often requires considerable digging around to find what you’re looking for. A more immediate and visual alternative is the NDepend report (example), which can be generated through VisualNDepend or via the NDependConsole. It contains a nice summary of things like assembly metrics, along with CQL queries such as “methods with more than 10 lines of code”. Importantly here, TeamCity can generate and display NDepend reports using NDependConsole, as Laurent Kempé explains here (using MSBuild, though it’s just as possible with NAnt or Rake).

However, I find even this report contains too much information, so we’ve modified the NDepend configuration file to show only four of the sections (see example): Assemblies Metrics, Assemblies Abstractness vs. Instability, Assemblies Dependencies Diagram, and CQL Queries and Constraints. It’s now much easier to read. For example, the assembly metrics at the top show us some of the same metrics used in the dashboards, but broken down by assembly. When this is integrated into TeamCity you merely need to click back to see how any of them have changed since previous builds.
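For anyone who hasn’t come across CQL, the “methods with more than 10 lines of code” query is about as simple as they get – something along these lines (sketched from memory of NDepend’s stock rules, so treat the exact syntax as indicative):

// Flag any method longer than 10 lines, worst offenders first
WARN IF Count > 0 IN
SELECT METHODS WHERE NbLinesOfCode > 10 ORDER BY NbLinesOfCode DESC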


We can also see that the method “WriteChange(…)” clearly needs some love, being the top method to refactor. When you compare the two reports side by side it’s easy to see how, just like methods or classes with too many lines of code, too much information can make otherwise valuable reports unreadable. I have to admit it took me a long time to get into using NDepend well, and a lot of that is down to the overwhelming amount of information it produces.

NCover


It’s no good finding out your test coverage has gone down if you don’t know why. You could pay for an NCover licence for each developer, but it’s less costly to integrate the NCover report into TeamCity. Again, Laurent Kempé explains how to do this here, and here is an example of the NCover report for our Dashboards project. It doesn’t provide the same amount of detail as the NCover GUI, but it will at least give you a good head start.

Tabs

So, in the end we have three tabs in our TeamCity project builds which, when used in conjunction with each other, give us a highly visual representation of how modifications are affecting the maintainability of our code. Of course there are many other reasons why code could be problematic, but the context these tools open up makes it much easier for developers to learn and understand – and therefore to care more about – the maintainability of their projects and the consequences of writing bad code.

Visualising the internal quality of software: Part 1

There are essentially two ways you can discuss the quality of software. External quality is something everyone can appreciate: for example, how easy it is for customers to use one of our products and whether they encounter any problems (bugs) whilst doing so. Internal quality, however, is a lot more complex and tricky to get right, but is just as important (if not more so). To me, internal quality is about how efficiently you can add or modify functionality without breaking existing functionality, not just today but over a long period of time. This is also known as the cost of change.

Another well-known term is software entropy – the more you add to and change a piece of software, the more complex and difficult doing so becomes. Eventually the cost of change simply becomes too high, and the long and arduous task of replacing the system begins. Of course, doing this has a massive impact on competitiveness, as you’re unable to deliver new functionality until the replacement is done – which is why it’s so important to make the effort to keep your code in good condition.

In the new world of software development we’re all really keen on visualisation (or “information radiators” as we’re supposed to call them), and with good reason. It aids conversation and helps identify pain points so that, as a team, we can focus on removing them. A while ago a former colleague and friend of mine, Peter Camfield, blogged about quality dashboards he and Josh Chisolm had been working on. We’ve recently implemented them at my current company as well. I’d like to go into more detail about the dashboards, some improvements I’ve made, and how, in combination with other visualisation tools, we’re able to make the most of them.

The "traffic lights" for one of our applications

The dashboards are created by running NDepend CQL queries* and NCover over the code and test assemblies for a project, aggregating the results, comparing them with the previous recording, and outputting them as a set of traffic lights with arrows showing whether each measurement has improved or worsened. The levels at which the traffic lights change colour are mostly based on recommendations from the NDepend Metrics Definition page.
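The traffic-light logic itself is nothing clever. Stripped right down, it’s something like this (a simplified sketch rather than the real dashboard code; the thresholds shown are illustrative):

public enum Light { Green, Amber, Red }

public static class TrafficLight
{
     // Illustrative thresholds -- ours mostly follow the NDepend
     // Metrics Definition recommendations
     public static Light ForCyclomaticComplexity(double value)
     {
          if (value <= 10) return Light.Green;
          if (value <= 20) return Light.Amber;
          return Light.Red;
     }

     // The arrow compares this build's value with the previous build's
     // (for complexity, lower is better)
     public static string Trend(double current, double previous)
     {
          if (current < previous) return "improved";
          if (current > previous) return "worsened";
          return "unchanged";
     }
}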

We’ve been running them in TeamCity (adding the output as a tab) every time code is checked in. Having lived with these dashboards for a while now, I can say I’ve found them invaluable. They’ve sparked countless discussions about how changes to the code base have affected the metrics, and really made us think about what good internal quality is. However, it’s also been frustrating that until now they’ve only shown changes from one build to the next, so I’ve recently spent some time adding line charts (using the Google Chart API) to show how the metrics have changed over time:

Line charts for the same application as above

The line charts are configured so that “up is good”. They give an immediate (albeit subjective) view of whether the internal quality is improving. We’ve only had these for a few weeks, and it will be really interesting to see how easy it is to get all the lines to go up, or at least remain stable**, and whether improvements in the metrics are reflected in the cost of change.
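Producing the charts themselves is little more than URL construction. Here’s a minimal sketch using the Google Chart API’s documented line-chart parameters (cht, chs, chtt and text-encoded chd, which expects values in the 0–100 range) – the metric name in the usage example is made up:

using System;
using System.Globalization;
using System.Linq;

public static class MetricChart
{
     // Builds a Google Chart API URL plotting a metric over successive builds
     public static string Url(string title, double[] values)
     {
          string data = string.Join(",",
               values.Select(v => v.ToString("F1", CultureInfo.InvariantCulture)).ToArray());

          return "http://chart.apis.google.com/chart"
               + "?cht=lc"                           // line chart
               + "&chs=300x150"                      // chart size in pixels
               + "&chtt=" + Uri.EscapeDataString(title)
               + "&chd=t:" + data;                   // text-encoded data points
     }
}

// e.g. MetricChart.Url("Test coverage %", new[] { 71.0, 73.5, 74.2, 78.0 })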

In part 2 I will talk about how we can use these reports in combination with other visualisations to help us understand how code modifications affect internal quality.

* It would take a long time to go into the detail of the CQL queries; suffice it to say they were chosen to give the broadest picture of the condition of the code without having so many that they just become noise.
** From the small experience I’ve had so far I don’t think it’s going to be very easy.