Do less, better

It may seem paradoxical, but you often get more done by doing less, better.

Delivery slow 🐱? Expectations and deadlines regularly being missed 📅? All too often it’s because teams are just trying to do too much at the same time.

Think of a congested motorway, where jams are frequently caused by nothing more than too many vehicles trying to go too fast. Smart motorways tackle this by reducing the speed limit, smoothing the flow so everyone gets where they’re going more quickly.

Larger organisations trying to do too much can be very inefficient, further exacerbated by dependencies between teams and complex org structures creating competing initiatives, akin to a city gridlocked at rush hour.

Why better as well as less? You didn’t plan for misunderstanding requirements, didn’t plan for code becoming more complex and harder to change, didn’t plan to fail QA, didn’t plan for it to break something in production.

These are all things more likely to happen when you’re juggling too much work at the same time (and they all slow you down), but won’t necessarily improve by just doing less. If you’ve been working this way, bad practices are probably baked in culturally.

📉 Doing less 📉

⚠ Reduce and limit the amount of work in progress at any one time (see WIP limits)
đŸ”Ș Break work down into the smallest possible deliverables
✅ Focus on getting things done before picking up new work
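The WIP advice above has a well-known queueing-theory basis in Little’s Law: average cycle time = average WIP Ă· average throughput. A toy sketch, with purely illustrative numbers:

```python
def avg_cycle_time(wip: float, throughput: float) -> float:
    """Little's Law: average cycle time = average WIP / average throughput."""
    return wip / throughput

# With throughput held at 10 items/week, halving WIP halves cycle time:
twenty_in_flight = avg_cycle_time(20, 10)  # 2.0 weeks per item
ten_in_flight = avg_cycle_time(10, 10)     # 1.0 week per item
```

In other words, starting less work is the most direct lever a team has on how quickly each individual item gets finished.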

🌟 Doing less, better 🌟

🎯 Ruthless prioritisation to ensure you’re always focusing on the work that will have the most impact
đŸ€ Make sure outcomes and requirements are clear and agreed before you start new work
đŸ‘©â€đŸ‘©â€đŸ‘Šâ€đŸ‘Š Take collective ownership and work as a team rather than passing tasks between silos
🔧 For engineers, focus on writing maintainable, well unit-tested code, so it’s easy to change and less likely to introduce bugs.

What practices have you found help create better focus, streamline delivery and get more done as a consequence?

Why you shouldn’t lose sleep over the existential threat of AI

There is no existential threat from Artificial Intelligence any time soon, despite what the headlines might have you believe, so I’m going to try and explain why.

Why are we hearing so much about it then? Fear, uncertainty, and doubt (FUD) make for great headlines, sell papers, and generate advertising revenue on podcasts. However, if you dig a little deeper, the substance beneath is considerably less sensational.

Understanding the current state of AI

AI as we know it today predominantly falls under what’s known as Machine Learning (ML). There are other concepts in use, but the vast majority – including Large Language Models (LLMs) like ChatGPT and image generators like Midjourney – are based on ML principles.

ML is learning in the very loosest sense. It’s intelligence in the very loosest sense.

Machine Training would be a more accurate description. Algorithms are fed data, evaluated on their outputs, adjusted, and then fed more data until their results improve. This iterative process eventually creates models capable of some very impressive tasks. But they’re not ‘learning’ in the way we think of a child or a baby chimp discovering the world. They’re not generating new, novel insights or demonstrating any form of consciousness or understanding.
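To make the “training, not learning” point concrete, here’s a toy sketch of that feed–evaluate–adjust loop: a one-parameter model fitted by gradient descent. All names and numbers are illustrative, not from any real ML library:

```python
# Toy illustration of the iterative "machine training" loop:
# feed data, evaluate the output, adjust, repeat.
def train(data, steps=1000, lr=0.01):
    w = 0.0  # single model parameter, starts knowing nothing
    for _ in range(steps):
        for x, y in data:
            pred = w * x         # model's current output
            error = pred - y     # evaluate against the known answer
            w -= lr * error * x  # adjust the parameter to reduce the error
    return w

# Data generated by the hidden rule y = 2x; training recovers w ≈ 2.
data = [(1, 2), (2, 4), (3, 6)]
w = train(data)
```

The model ends up reproducing the pattern in its data extremely well, but it has no concept of *why* y is twice x – which is the sense in which “training” is a better word than “learning”.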

Artificial General Intelligence (AGI) is no more than theory

ML isn’t remotely close to the kind of intelligence that could theoretically pose an existential threat to humanity. The fundamentals of ML have been around for at least 40 years and it’s taken us that long to get to a point where it has genuine, widespread practical applications.

As for AGI, there are currently no accepted theories for how it could even be achieved. There are plenty of ideas, but they remain hypothetical. Could machines become genuinely intelligent? Possibly. But no one knows for sure.

Predictions of when the “Singularity” (the point at which artificial intelligence surpasses human intelligence) will arrive are thus pure conjecture.

Ignore the FUD and focus on the real issues of AI

While ML-based AI is undeniably changing our lives, it is doing so in the same way computers have been since the invention of the pocket calculator. There are tasks at which computers already outperform us, like processing large amounts of data and performing complex calculations, but for the vast majority of what we consider to be human intelligence, they’re still light years away.

We’re no closer to a Terminator-style “Judgement Day” than when Alan Turing first started kicking around the idea of AI in the mid-20th century.

That’s not to say AI doesn’t present us with challenges. Job displacement, privacy concerns, potential misuse, and inherent biases are real and pressing issues we need to address. We’d be better off focusing on these tangible problems rather than worrying about hypothetical existential threats posed by AGI. Let’s redirect our energy to making sure that our use of AI is responsible, ethical, and beneficial for all.

On confirmation bias

I grow more convinced each day that one of our biggest battles, in our organisations and even society as a whole, is with Confirmation Bias.

Confirmation Bias is when we unconsciously look for, interpret, and remember information that backs up our own beliefs or values, and downplay information that doesn’t.

It’s all around us and has likely grown worse with the rise of social media. We create “filter bubbles” by following only what we like, and recommendation algorithms make this even easier to do.

Recent events like Covid, Brexit, and even Twitter’s rate limiting over the last few days, show how people selectively use information to back their view.

This also happens at work, especially in “them and us” cultures between teams. A common example I see is between commercial and development teams:

Development says, “Commercial sell new features without asking, make unreasonable demands, and don’t care about tech.”

Commercial says, “Development take too long, only care about the tech stuff and don’t care about the business being successful.”

In both cases, we tend to amplify the information that backs our view and ignore what doesn’t. This makes our biases stronger and the “them and us” gap bigger, which hurts open communication and cooperation (let alone being an unpleasant working environment).

So, what can we do?

First, have some humility. Realise that YOU are just as vulnerable to Confirmation Bias as anyone else. We all like to think we’re more objective than everyone else. We’re not. Get over it!

Second, show some empathy. Put yourself in the other person’s shoes. Engage positively and with an open mind. It’s amazing how many times I’ve had an “Ah ha” moment, and even apologised for how I acted when I better understood their viewpoint. Crucially, this also builds trust, which is vital in being able to work together to solve problems.

Lastly, burst your filter bubbles. Follow and read viewpoints you disagree with as well as ones you do. Be careful about opinions that don’t have evidence to back them up. And check that the evidence is reliable.

Challenging our biases can be tough, but it’s worth it. By doing so, we build stronger connections, foster better communication, and create more collaborative environments. And who knows, we might even change our minds along the way!

Quality is a team sport

I think it was Jamie Arnold who first introduced me to this phrase.

In engineering teams it’s – sadly – still all too common that Quality Assurance (QA) is the last step in the delivery process. Developers code, then throw it over the wall to QAs to test. Teams working this way typically have a high rate of failure and large release bottlenecks – features and releases pile up, waiting on the QAs. Developers pick up more new work whilst they’re waiting. Bugs come back and developers are now juggling bug fixes and new work.

It’s slow, inefficient and costly!

What I dislike the most is the cultural aspect – the implication that quality is the responsibility of QAs, not the developers who wrote the code.

Quality is a team sport. The most valuable role for QAs* is to ensure quality is baked into the entire end-to-end delivery process. This has become known as “shift-left” – QAs moving away from spending all their time at the end of the delivery lifecycle and focusing more on how we can “build quality in” throughout.

What does this look like in practice?

– QAs involved in requirements gathering and definition, making sure requirements are clear and well understood, and that we’ve considered how we’re going to test them (incl. automated tests).

– QAs ensure we’re following our agreed Software Delivery Lifecycle (SDLC) and the steps and controls we have in place to keep quality front of mind.

– QAs collaborate with developers to write automated tests; developers collaborate with QAs on mutation testing, compatibility testing and performance testing.

– If there’s any manual testing required, everyone gets involved. QAs make sure everyone in the team is capable of doing it.

It’s a much richer role for QAs, and far better for everyone!

Fewer, better, people

This is something I said in a talk on high performing teams recently that resonated with a few folks.

In my experience the most effective teams are small, between 3 and 5 members, and the most effective organisations are the ones that manage to stay small overall.

Why might this be? Fewer people streamlines communication: a 3-member team has 3 communication channels, a 5-member one has 10. The number of channels grows quadratically with every person you add.
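That channel count is the handshake formula, n(n−1)/2 – easy to sanity-check:

```python
def channels(team_size: int) -> int:
    # Pairwise communication channels between team members: n(n-1)/2
    return team_size * (team_size - 1) // 2

sizes = {n: channels(n) for n in (3, 5, 9)}  # {3: 3, 5: 10, 9: 36}
```

Going from 5 to 9 people more than triples the coordination overhead, even though the headcount hasn’t quite doubled.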

In small teams, alignment is more organic. Greater shared understanding fosters greater autonomy and more informed decision making.

“Better” is not just about technical expertise. Behaviours are just as important, if not more so (teamwork, communication, adaptability and so on).

In a high performing team, the whole is greater than the sum of its parts.

With an underperforming team, adding more people will most likely slow things down (it may not look like it at first because everyone is “busy”, but it will).

How can you stay small? Do less, better.

Teams that write unit tests go faster

In the fast-paced world of start-ups, it’s common to overlook the importance of writing unit tests*. With limited resources and short timescales, many consider it a luxury rather than a necessity (if they even consider them at all).

The long-term impact is far from marginal – if you’re lucky enough to start getting to scale, you’ll regret not investing in them early on.

However, even in the short term, writing unit tests can speed up your delivery. Here’s how 👇

🐛 Bug Reduction

Unit tests enable teams to catch bugs early, before they reach production. This not only improves user experience, but also saves time and resources in testing, debugging and hotfixes.
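As a minimal illustration (the function and its behaviour here are hypothetical, not from any real codebase), a unit test like this pins down the expected behaviour and catches a regression the moment it’s introduced, long before QA or production:

```python
# A hypothetical pricing function and the unit test that protects it.
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent, rounded to 2 decimal places."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    assert apply_discount(100.0, 10) == 90.0   # the happy path
    assert apply_discount(19.99, 0) == 19.99   # zero discount is a no-op
    try:
        apply_discount(50.0, 150)              # invalid input is rejected
        assert False, "expected ValueError"
    except ValueError:
        pass
```

Run under any test runner (e.g. pytest); the test also doubles as documentation of the edge cases.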

⚡ Quicker Changes

Good unit tests encourage modular, less complex code. This makes it easier to implement changes and add new features. Furthermore, unit-tested code acts as its own documentation, reducing the time needed to understand how the code works.

🔄 Frequent Releases

With a solid suite of tests in place, the risk associated with each release decreases. Developers and stakeholders gain confidence that the new changes haven’t broken existing functionality, enabling more frequent releases and quicker feature rollout.

đŸ‘„ Fewer People (& Cost)

Unit tests reduce the number of people required overall: less resource is needed for manual testing and debugging, and the overall cost of change and maintenance is lower because the tests encourage less complex code.

đŸŒ± Why Early-Stage Start-ups Should Care

Many early-stage start-ups don’t invest in unit tests, especially when the development team is small. However, as the team grows, their absence becomes increasingly detrimental. Adding unit tests retroactively can be a herculean task, particularly if your codebase has already turned into spaghetti.

In summary, unit testing is not just a “nice-to-have”; it’s a strategic advantage. Even if you’re working with a lean team, the benefits far outweigh the initial time investment. The sooner you start, the faster you’ll go.

Avoid the messy middle with hybrid working

Either have regular set office team days or choose fully remote, but avoid the messy middle of “come in when you need to”

Firstly, a few disclaimers: this article doesn’t intend to compare or argue the merits of fully remote working versus co-location/hybrid. Secondly, these are my views and not those of my employer or any other organisation.

As things are gradually getting back to normal, many organisations are formalising their hybrid working policies. Some are choosing to have set office days – including my current organisation, where our Product & Tech leadership (which I’m part of) made the decision to take this approach in early 2021. You can read about our rationale here. It’s not long and saves me repeating it in this article.

Others are taking the “come in when you need to” approach. This generally means if a team needs some face time for activities that benefit from in person interaction, they arrange to come into the office together. Otherwise do what suits you (the individual) best.

Why “come in when you need to” is the messy middle

When I try to arrange to meet up with friends who are now spread across the country (or even old work colleagues locally, for that matter), it’s a military operation finding a time when everyone is free. Usually we have to schedule months in advance. Even then, things fall through as often as not.

I’m hearing similar stories from organisations currently taking the “come in when you need to” approach – teams finding it a struggle to get everyone together in person at the same time, especially when they’ve now hired people geographically further out from their offices. I’d imagine this gets exponentially harder when you want to get a few teams together who are, for example, collaborating on a shared outcome.

For working parents I see it as a particular issue. Most parents I know don’t have five full (i.e. 8am-6pm) working days of childcare (due to the expense). It’s usually a mix of paid childcare, grandparents and then parents splitting shifts on drop-offs and pick-ups (thankfully at my current employer we have flexible working hours which allow you to do this). In my experience at least, it’s a highly disciplined and drilled exercise; it needs a routine and is difficult to change on a whim.

“Hey, why don’t we all go into the office tomorrow and workshop it in person?”

What if you’re the only parent on your team and now feel like the “difficult” one because you can’t just always change your routine that quickly? Now they’re all going in anyway and you’re missing out?

From an HR perspective, it’s ambiguous and difficult territory. What if someone says they won’t come in? Is that a conduct issue? When you have set days for everyone it’s pretty simple: they’re the same policies you had pre-hybrid working for those days. In the “come in when you need to” world you’re going to require a very clear definition of “need” and, most likely, new and complicated colleague policies.

I’ve a strong suspicion (I know it’s the case in some places) that quite a few organisations taking the “come in when you need to” approach aren’t doing so because they see it working, but because they’re still in wait-and-see mode. Why? Primarily, I’d guess, fear of attrition in a highly competitive labour market. I predict that as more organisations come out formally with set office day working patterns, others will follow.

Choose set days or go fully remote?

Like I said in my disclaimer, I’m not out to argue the case for either remote or co-location here, just highlighting the situation I see as the worst of both worlds. If you can’t see set days working for your organisation, then perhaps it’s worth looking into whether fully remote is a better option.

Line management in Agile Teams

Line management is currently on my mind as I’ve moved to a new company (VP Engineering, team of 30+ people). Coincidentally, it’s also something I’ve recently been asked about by a peer in a similar position. Modern management practices tend to frown on line management as it smacks of traditional organisational structures. However, out with line management tends to go any formal pastoral care for staff, and inexperienced or unqualified people get left to deal with complicated situations with little or no guidance.

Below is advice based on my experiences. I’m happy to answer any questions, but I don’t present anything here as a shining example of good or bad (“where’s the Holacracy, dude?”), just stuff that has worked well for me over the years.

Everyone needs good guidance
I wrote this article on Roles & Responsibilities in Software Teams over 5 years ago and have now used these effectively in 3 companies. I find it really helps for everyone to be clear on what’s expected of them, and it certainly makes the line manager’s job easier to have something which defines positive (and negative) behaviour.

Team Lead as line manager for the team
In my roles above, the Team Lead is basically line manager for the team. Their most important line management duty is regular 1-2-1s with their team, to make sure everyone’s happy and productive and to catch any arising situations or issues quickly. The Team Lead will also deal with team-related line management matters, such as approving holiday, work-from-home requests, etc.

The CTO/VP/Director/Head of Dev (i.e. me) will have more frequent 1-2-1s with the Team Leads than other team members so there’s a good feedback loop and any issues can quickly get escalated if needed.

Team Lead != Lead Developer
I intentionally separated the roles of Team Lead and Lead Developer, as being good technically does not make you a good people/line manager (see the Peter Principle). In many teams I’ve looked after, the same person holds both roles, but not always.

Ultimately I’m the Line Manager though…
When it comes to more substantial issues, such as anything requiring expenditure (e.g. pay increase requests, training) or performance issues that the Team Lead cannot solve themselves (e.g. when you’re getting near the realm of disciplinary proceedings), that’s where I will take over from or support the Team Lead. Ideally a team can work through most of its issues, but not always.

Line managers need good guidance and training
Looking after people comes more intuitively to some than others, but either way it is a discipline people need training and guidance in – how to give good feedback is a great example, as are good listening skills. I make an effort to mentor Team Leads in my 1-2-1s with them, but it’s good to have wider organisational initiatives too.

Pay reviews and performance appraisals
I’ve written about my experiences with pay, performance and feedback previously. I consider regular 1-2-1s (with the Team Lead and myself) to take the place of annual performance appraisals. However, most companies still do pay reviews annually, which means some form of annual pay review meeting is required. As something I’d consider a more substantial line management issue, I personally take responsibility for those pay review meetings with all my staff.


7digital Development Team Productivity Report 2013

Last year (2012) I published data on the productivity of our development team at 7digital.

I’ve completed the productivity report for this year and would again like to share it with you. We’ve now been collecting data from teams for over 4 years, with just under 4,000 data points collected over that time. This report covers April 2012 to April 2013.

New to this year is data on the historical team size (from January 2010), which has allowed us to look at the ratio of items completed to the size of the team and how the team size compares to productivity. There’s also some analysis of long term trends over the entire 4 years.

In general the statistics are very positive and show significant improvements in all measurements against the last reported period:

  • a 31% improvement in Cycle Times for all work items
  • a 43% improvement in Cycle Times for Feature work
  • a 108% increase in Throughput for all work items
  • a 54% increase in Throughput for Feature work
  • a 103% improvement in the ratio of Features to Production Bugs
  ‱ a 56% increase in the number of Items completed per person per month
  ‱ a 64% increase in the number of Features completed per person per month

DevTeamPerformanceReportApr12Apr13 (pdf)

The report includes lots of pretty graphs and background on our approach, team size and measurement definitions.

A brief summary of the last 4 years:

  • Apr09-Apr11* Cycle Time improved (but not Throughput or Production Bugs)
  • Apr11-Apr12 Throughput & Cycle Time improved (but not Production Bugs)
  • Apr12-Apr13 All three measurements improved!

*The first productivity report collated 2 years’ worth of data.

It’s really pleasing to see we’re finally starting to get a handle on Production Bugs and that things generally continue to improve. It’s interesting to see this pattern of improvement. We haven’t got any particularly good explanation for why things happened in that order, and we’re curious whether other organisations have seen similar patterns or had different experiences. We’d expect it varies from organisation to organisation, as the business context has a massive influence. 7digital is no different from any other organisation in that you have to be able to balance short-term needs against long-term goals. If anything, our experiences just further support the fact that real change takes time.

We must add the caveat that these reports do not tell us whether we’re working on the right things, in the right order or anything else really useful! They’re just statistics and ultimately not a measure of progress or success. However we’re strong believers in the concept that you’ve got to be able to “do it right” before you can “do the right thing”, supported by the study by Shpilberg et al, Avoiding the Alignment Trap in IT.

We hope you find this information useful and that it helps other teams justify following best practices like Continuous Delivery in their organisations. We would of course be interested in any feedback or thoughts you have. Please contact me via Twitter (@robbowley) or leave a comment if you wish to do so.

Pay, performance and feedback – an experience report (and where we are now)

I’ve written up an experience report on my recent adventures trying to improve the way we do pay reviews (it’s more interesting than you might think).

Like many companies we’ve been struggling with a problematic pay review process. In our case the feedback mainly revolved around it feeling arbitrary and lacking transparency. Around the time we were discussing this the Valve Handbook got posted, within which it talked about their peer review & stack ranking system:

“We have two formalized methods of evaluating each other: peer reviews and stack ranking. Peer reviews are done in order to give each other useful feedback on how to best grow as individual contributors. Stack ranking is done primarily as a method of adjusting compensation. Both processes are driven by information gathered from each other—your peers.”

Awesome, you get rated by your peers rather than a manager or HR person who has no idea what you do (not that we did that anyway)! I liked this idea a lot and got to work on doing our own version. I started with a trial peer review survey with one team. There were some positives, but it mostly went down badly. People really didn’t like the stack ranking, or that I only asked a few high-level questions with the answer being a score out of 10. So we went back to the drawing board. We got representatives from all our teams and held 4 or 5 sessions where we broke down the larger themes (Skill, Productivity, Communication, Team) into more detailed and objective questions. After a lot of persistence and effort we finally put this all together and we had the survey! Which I promptly canned…

Trying to measure performance

The fundamental problem I was (naĂŻvely) trying to solve with a peer review survey was to bring in a degree of measurement, which would hopefully mean people felt the pay review process had a quantifiable aspect and didn’t just come down to one person’s opinion. However, we were getting into the terrain of incentivising our people based on individual optimisations (rather than organisational or team goals & objectives) and – most disturbingly – the anonymous feedback aspect just felt very wrong. It was and is completely contrary to our culture and the things we stand for. The trade-offs simply weren’t worth it. Regardless of the unpleasantness of anonymous feedback, everywhere I’ve heard of using ranking/measurement schemes has really bad stories to tell, such as Microsoft and GE. Warning signs everywhere.

Don’t mix pay reviews with feedback

Another problem is the survey would have been a kind of feedback mechanism. Imagine getting your results – all nicely presented in bar charts – and finding you scored really badly on one section. What the heck are you supposed to do with that?! I’m a really bad communicator? What do they mean by that? Who thinks that? Great, everyone thinks I’m rubbish but I’ve got no way of finding out why apart from going around everyone and asking them. Ouch!

I am a big believer in regular 1-2-1s (I’ve talked about them a bit here). As Head of Development I start every day with a 1-2-1 with one of my department (I see all 35+ people as regularly as I can). Each team also does 1-2-1s (usually with their Lead if they have one). Each new joiner gets a mentor who they have a monthly 1-2-1 with for their first 3-6 months. By the time you get to a pay review there should be no surprises, no feedback that you haven’t already heard before.

Where we are now

Our latest attempt is heavily based (& in some parts very plagiarised, I have to admit) on the StackExchange compensation scheme. The peer review survey wasn’t a complete waste. I adapted the themes and questions from the survey we built into a set of core values we desire from our colleagues, to be used for guidance. This is still very new and as yet unproven, but it certainly feels a lot better than where we were heading previously. I could explain in more detail, but it would be duplicating what I’ve said in the document/guide, which you can download here: 7digital Dev & DBA Team Compensation

Finally, a word on annual performance appraisals

I’ve only been employed by one company who ran annual performance appraisals and was far from impressed. It’s something we’ve consciously avoided for our tech teams at 7digital. I could go into more details as to why they are wrong, but others much more qualified than me have already done so: