Some things that might help you make better software | David R. MacIver

http://www.drmaciver.com/2016/10/some-things-that-might-help-you-write-better-software/

I’ve argued before that most software is broken for economic reasons. Software which is broken because there is no incentive to ship good software is going to stay broken until we manage to change those incentives. Without that there will be no budget for quality, and nothing you can do is going to fix it.

But suppose you’re in the slightly happier place where you do have budget for quality? What then? What can you do to make sure you’re actually spending that budget effectively and getting the best software you could be getting out of it?

I don’t have an easy answer to that, and I suspect none exists, but I’ve been doing this software thing for long enough now that I’ve picked up some things that seem to help quality without hurting (and ideally helping) productivity. I thought it would be worth writing them down.

Many of them will be obvious or uncontroversial, but if you’re already doing all of them then your team is probably doing very well.

This is all based somewhat on anecdote and conjecture, and it’s all coloured by my personal focuses and biases, so some of it is bound to be wrong. However I’m pretty sure it’s more right than wrong and that the net effect would be strongly positive.

Without further ado, here is my advice.

Attitude

If you do not care about developing quality software you will not get quality software no matter what your tools and processes are designed to give you.

This isn’t just about your developers either. If you do not reward the behaviour that is required to produce quality software, you will not get quality software. People can read their managers’ minds, and if you say you want quality software but reward people for pushing out barely functioning rubbish, people are smart enough to figure out you don’t really mean that.

Estimated cost: Impossible to buy, but hopefully if you’re reading this article you’re already there. If you’re embedded in a larger context that isn’t, try creating little islands of good behaviour and see if you can bring other people around to your way of thinking.

Estimated benefit: On its own, only low to moderate – intent doesn’t do much without the ability to act – but it is the necessary precursor to everything else.

Controversy level: Probably most people agree with this, although equally it doesn’t feel like most people implement this. I imagine there are some people who think they can fix this problem if they just find the right process. Maybe they’re right, but I’ve never seen something even approaching such a process.

Automated Testing

Obviously I have quite a few thoughts about automated testing, so this section gets a lot of subheadings.

Continuous Integration

You need to be running automated tests in some sort of CI server that takes every ostensibly releasable piece of software and checks whether it passes the tests.

If you’re not doing this, just stop reading this article and go set it up right now, because it’s fundamental. Add a test that just fires up your website and requests the home page (or some equivalent if you’re not writing a website). You’ve just taken the first and most important step on the road from throwing some crap over the wall and seeing if anyone on the other side complains about it landing on them to actual functional software development.
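
For a web application, that first test can be a handful of lines. Here’s a minimal sketch, assuming your CI deploys the app somewhere reachable (the URL is a placeholder) and that you have the requests library available:

    import requests

    def test_home_page_responds():
        # The most basic possible smoke test: the application starts
        # and serves its home page. The URL stands in for wherever
        # your CI runs the app under test.
        response = requests.get("http://localhost:8000/")
        assert response.status_code == 200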

Estimated cost: Hopefully at worst a couple of days initial outlay to get this set up, then small to none ongoing.

Estimated benefit: Look, just go do this already. It will give you a significant quality and productivity boost.

Controversy level: It would be nice to think this was uncontroversial. It’s certainly well established best practice, but I’ve worked at companies that don’t do it (a while ago), and a friend basically had to ram this through at the company they’d recently joined.

Local Automated Testing

You need to be able to run a specific test (and ideally the whole test suite) against your local changes.

It doesn’t really matter whether it actually runs on your local computer, but it does matter that it runs fast. Fast feedback loops while you work are incredibly important. In many ways the length of time it takes to run a single test against my local changes is the biggest predictor of my productivity on a project.

Ideally you need to be able to select a coherent group of tests (all tests in this file, all tests with this tag) and run just those tests. Even better, you should be able to run just the subset of the tests you asked for that failed last time. If you’re using Python, I recommend py.test, which supports all of these features. If you’re currently using unittest you can probably just start using py.test as an external runner without any changes to your code.
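
To give a flavour of what that looks like in py.test (the test names and the “slow” marker are made up for illustration; the invocations are standard):

    import pytest

    def normalise(name):
        return name.strip().lower()

    @pytest.mark.slow  # a tag you define; select these with: py.test -m slow
    def test_normalise_strips_whitespace():
        assert normalise("  Alice ") == "alice"

    def test_normalise_lowercases():
        assert normalise("BOB") == "bob"

    # Useful invocations:
    #   py.test test_normalise.py   - all tests in this file
    #   py.test -m slow             - all tests with a given tag
    #   py.test --lf                - only the tests that failed last run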

Estimated cost: Depends on available tooling and project. For some projects it may be prohibitively difficult (e.g. if your project requires an entire Hadoop cluster to run the code you’re working on), but for most it should be cheap to free.

Estimated benefit: Similarly “look, just go do this already” if you can’t run a single test locally. The more specific improvements will give you a modest productivity boost and maybe some improvement in quality if they make you more likely to write good tests, which they probably will.

Controversy level: Not very. I’ve only worked at one company where running tests locally wasn’t a supported workflow, and I fixed that, but workflows which support my slightly obsessive focus on speed of running a single test are rarely as good as I’d like them to be.

Regression Testing

The only specific type of automated testing that I believe that absolutely everybody should be doing is regression testing: If you find a bug in production, write a test that detects that bug before you try to fix the bug. Ideally write two tests: One that is as general as possible, one that is as specific as possible. Call them an integration and a unit test if that’s your thing.

This isn’t just a quality improvement, it’s a productivity improvement. Trying to fix bugs without a reproducible example of that bug is just going to waste your time, and writing a test is the best way to get a reproducible example.
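
As a sketch of the pattern (the bug here is hypothetical): suppose users with a “+” in their email address couldn’t sign up.

    import re

    EMAIL_RE = re.compile(r"^[\w.+-]+@[\w-]+\.[\w.-]+$")

    def is_valid_email(address):
        return bool(EMAIL_RE.match(address))

    # As specific as possible: pins the bug to the exact broken check,
    # named so future readers know what it guards against.
    def test_email_validator_accepts_plus_addressing():
        assert is_valid_email("alice+news@example.com")

    # As general as possible: in a real suite a second test would drive
    # the whole signup flow end to end with the same problematic input.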

Estimated cost: Zero, assuming you have local testing set up. This is just what you should be doing when fixing bugs, because it beats the alternatives – it results in faster development and bugs that actually stay fixed.

Estimated benefit: Depends how much time you spend fixing bugs already, but it will make that process faster and will help ensure you don’t have to repeat the process. It will probably improve quality somewhat by virtue of preventing regressions and also ensuring that buggier areas of the code are better tested.

Controversy level: In theory, not at all. In practice, I’ve found many to most developers need continual reminders that this is a thing you have to do.

Code Coverage

You should be tracking code coverage. Code coverage is how you know code is tested. Code being tested is how you know that it is maybe not completely broken.

It’s OK to have untested code. A lot of code isn’t very important, or is difficult enough to test that it’s not worth the effort, or some combination of the two.

But if you’re not tracking code coverage then you don’t know which parts of your code you have decided you are OK with being broken.

People obsess a lot about code coverage as a percentage, and that’s understandable given that it’s the easiest thing to get out of it, but in many ways it’s the least important part of code coverage. Even the percentage broken down by file is more interesting than the overall number, but the annotated view of your code is really the most important part, because it tells you which parts of your system are not tested.

My favourite way to use code coverage is to insist on 100% code coverage for anything that is not explicitly annotated as not requiring coverage, which makes it very visible in the code if something is untested. Ideally every pragma to skip coverage would also have a comment with it explaining why, but I’m not very good about that.

As a transitional step to get there, I recommend using something like diff-cover or coveralls, which let you set up a ratcheting rule in your build that prevents you from decreasing the amount of code coverage.
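
In Python this looks something like the following. The pragma comment is coverage.py’s standard exclusion marker, and setting fail_under = 100 in your .coveragerc then enforces the “100% or explicitly annotated” rule:

    import json

    DEFAULTS = {"timeout": 10}

    def load_settings(path):
        try:
            with open(path) as f:
                return json.load(f)
        except FileNotFoundError:  # pragma: no cover
            # Deliberately untested: hard to hit reliably in CI.
            # The annotation makes that decision visible in the code.
            return dict(DEFAULTS)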

Estimated cost: If your language has good tooling for coverage, maybe a couple hours to set up. Long-term, essentially free.

Estimated benefit: On its own, small, but it can be a large part of shifting to a culture of good testing, which will have a modest to large effect.

Controversy level: Surprisingly high. Of the companies I’ve worked at, precisely zero have tracked code coverage (in one case there was a push for it, but younger me argued against it – my opinions on testing have changed a lot over the years).

Property-based Testing

Property-based testing is very good at shifting the cost-benefit ratio of testing, because it somewhat reduces the effort to write what is effectively a larger number of tests and increases the number of defects those tests will find.

I won’t write too much about this here because I have an entire separate site about this.
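
But to give a flavour of it, here is a classic round-trip property in Hypothesis (the encoder is a toy written for this example):

    from hypothesis import given, strategies as st

    def encode(s):
        """Toy run-length encoder: 'aaab' -> [('a', 3), ('b', 1)]."""
        out = []
        for c in s:
            if out and out[-1][0] == c:
                out[-1] = (c, out[-1][1] + 1)
            else:
                out.append((c, 1))
        return out

    def decode(encoded):
        return "".join(c * n for c, n in encoded)

    @given(st.text())  # Hypothesis generates hundreds of diverse strings
    def test_decode_inverts_encode(s):
        assert decode(encode(s)) == s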

Estimated cost: If you’re using a language with a good property-based testing tool, about 15 minutes to install the package and write your first test. After that, free to negative. If you’re not, the estimated cost of paying me to port Hypothesis to a new language is around £15–20k.

Estimated benefit: You will find a lot more bugs. Whether this results in a quality improvement depends on whether you actually care about fixing those bugs. If you do, you’ll see a modest to large quality improvement. You should also see a small to modest productivity improvement if you’re spending a lot of time on testing already.

Controversy level: Not very high, but niche enough that most people haven’t formed an opinion on it. Most people think property-based testing is amazing when they encounter it. Some push back on test speed and non-determinism (both of which have partial to complete workarounds, in Hypothesis at least).

Manual Testing

At a previous job I found a bug in almost every piece of code I reviewed. I used a shocking and complicated advanced technique to do this: I fired up the application with the code change and tried out the feature.

Manual testing is very underrated. You don’t have to have a dedicated QA professional on your team to do it (though I suspect it helps a lot if you do), but new features should have a certain amount of exploratory manual testing done by someone who didn’t develop them – whether it’s another developer, a customer service person, whatever. This will both find actual bugs and give you a better idea of the feature’s usability.

And then if they do find bugs those bugs should turn into automated regression tests.

Estimated cost: It involves people doing stuff on an ongoing basis, so it’s on the high side because people are expensive, but it doesn’t have to be that high to get a significant benefit. You can probably do quite well with half an hour of testing for a feature that took days to develop. It may also require infrastructure changes to make this easy to do, which can vary in cost and difficulty, but worst case scenario you can do it on the live system.

Estimated benefit: You will almost certainly get a moderate quality improvement out of doing this.

Controversy level: Having QA professionals seems to be entirely against the accepted best practice in startups. The rest, similar to regression testing: Doing a bit of manual testing seems to be one of those things where people say “Of course we do that” and then don’t do it.

Version Control

You need to be using a version control system with good branching and merging. This is one of the few pieces of advice where my recommendation requires making a really large change to your existing workflow.

I hope that it’s relatively uncontroversial that you should be using version control (not everybody is!). Ideally you should be using good version control. I don’t really care if you use git, mercurial, fossil, darcs, whatever. We could get into a heated argument about which is better but it’s mostly narcissism of small differences at this point.

But you should probably move off SVN if you’re still on it and you should definitely move off CVS if you’re still on it. If you’re using Visual SourceSafe you have my sympathies.

The reason is simple: If you’re working on a team of more than one person, you need to be able to incorporate each other’s changes easily, and you need to be able to do that without trashing your own work. If you can’t, you’re going to end up wasting a lot of your time.

Estimated cost: Too project dependent to say. Importer tools are pretty good, but the real obstacle is always going to be the ecosystem you’ve built around the tools. At best you’re going to have a bad few weeks or months while people get used to the new system.

Estimated benefit: Moderate to large. Many classes of problems will just go away and you will end up with a much more productive team who find it much easier to collaborate.

Controversy level: Basically uncontroversial. Not as widespread as you might imagine, but not controversial. Once git started becoming popular, basically everywhere I’ve worked used it (with one exception for mercurial and one exception for Google’s interesting Perforce-ish system).

Monorepos

Use a single repository for all your code.

It’s tempting to split your projects into lots of small repos for libraries and services, but it’s almost always a bad idea. It significantly constrains your ability to refactor across the boundary and makes coordinating changes to different parts of the code much harder in almost every way, especially with most standard tooling.

If you’re already doing this, this is easy. Just don’t change.

If you’re not, just start by either creating or designating an existing repository as the monorepo and gradually move the contents of other repos into it as and when convenient.

The only exception, where you probably do need to avoid this, is specific projects you’re open sourcing, but even then it might be worth developing them in the monorepo and mirroring them to an external repo.

This point has proved controversial, so if you’re still unconvinced I have written a longer advocacy piece on why you should use a monorepo.

Estimated cost: Too project dependent to say, but can be easily amortised over time.

Estimated benefit: Every time you do something that would have required touching two repos at once, your life will be slightly easier because you are not paying coordination costs. How much this matters depends on how frequent that is, but experience suggests it’s at least a modest improvement.

Controversy level: High. This piece of advice is extremely love/hate. I think most of the people who love it are the ones who have tried it at least once and most of the people who hate it are those who haven’t, but that might be my biases speaking. It’s been pretty popular where I’ve seen it implemented.

Static Analysis

I do not know what the right amount of static analysis is, but I’m essentially certain that it’s not none. I would not be surprised to learn that the right amount was quite high and includes a type system of some sort, but I don’t know (I also would not be surprised to discover that it was not). However even very dynamic languages admit some amount of static analysis and there are usually tools for it that are worth using.

I largely don’t think of this as a quality thing though; it’s much more a productivity improvement. Unless you are using a language that actively tries to sabotage you (e.g. C, JavaScript), or you have an atypically good static analysis system that does much more work than the ones I’m used to (I’m not actually aware of any of these that aren’t for C and/or C++, except for languages with genuinely advanced type systems), static analysis is probably not going to catch bugs much more effectively than a similar level of good testing.

But what it does do is catch those bugs sooner and localise them better. This significantly improves the feedback loop of development and stops you wasting time debugging silly mistakes.

There are two places that static analysis is particularly useful:

  1. In your editor. I use syntastic because I started using vim a decade ago and haven’t figured out how to quit yet, but your favourite editor and/or IDE will likely have something similar (e.g. The Other Text Editor has flycheck). This is a really good way of integrating lightweight static analysis into your workflow without having to make any major changes.
  2. In CI. The ideal number of static analysis errors in your project is zero (this is true even when the static analysis system has false positives in my opinion, with the occasional judicious use of ‘ignore this line’ pragmas), but you can use the same tricks as with code coverage to ratchet them down to zero from wherever you’re starting.

Most languages will have at least a basic linting tool you can use, and with compiled languages the compiler probably has warning flags you can turn on. Both are good sources of static analysis that shouldn’t require too much effort to get started with.
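
A minimal sketch of the ratcheting trick mentioned above, assuming flake8 and a baseline count checked into the repo (both the tool and the file name are illustrative):

    import subprocess
    import sys

    BASELINE_FILE = "lint_baseline.txt"  # contains a single number

    # flake8 prints one line per error; count them rather than check=True,
    # since a nonzero exit is expected while errors remain.
    result = subprocess.run(["flake8", "."], capture_output=True, text=True)
    current = len(result.stdout.splitlines())
    baseline = int(open(BASELINE_FILE).read())

    if current > baseline:
        sys.exit("Lint errors went up: %d > baseline %d" % (current, baseline))
    if current < baseline:
        print("Lint errors fell to %d; lower the baseline to lock that in."
              % current)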

Estimated cost: To use it in your editor, low (you can probably get it set up in 10 minutes). To use it in your CI, higher but still not substantial. However depending on the tool it may require some tuning to get it usable, which can take longer.

Estimated benefit: Depends on the tool and the language, but I think you’ll get a modest productivity boost from incorporating static analysis and may get a modest to large quality boost depending on the language (in Python I don’t think you’ll get much of a quality benefit. In C I think you’ll get a huge one even with just compiler warnings).

Controversy level: Varies entirely depending on level of static analysis. Things that you could reasonably describe as “linting” are low. Things that require something closer to a type system much higher. Tools with a high level of false positives also high. You can definitely find an uncontroversial but still useful level of static analysis. I’ve seen it at a moderate subset of the companies I worked for.

Production Error Monitoring

You should have some sort of system that logs all errors in production to something more interactive than a log file sitting on a server somewhere. If you’re running software locally on end users’ computers this may be a bit more involved and should require end user consent, but if you’re writing a web application we’re all used to being pervasively spied on in everything we do anyway so who cares?

I’ve used and like Sentry for this. There are other options, but I don’t have a strong opinion about them.
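
Getting started really is minimal. With Sentry’s Python SDK it’s essentially the following (the DSN is a per-project credential Sentry gives you; this one is a placeholder):

    import sentry_sdk

    sentry_sdk.init(dsn="https://publickey@example.ingest.sentry.io/0")

    # From here, any uncaught exception in the process is reported with a
    # full stack trace. Errors you handle yourself can still be recorded:
    try:
        {}["missing"]  # stand-in for real work going wrong
    except KeyError:
        sentry_sdk.capture_exception()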

Estimated cost: Depends on setup, but getting started with Sentry is easy and it doesn’t cost a particularly large amount per month (or you can host the open source version for the cost of a server).

Estimated benefit: Much better visibility of how broken your software is in production is the best thing for making your software less broken in production. It will also speed up your debugging process a lot when you do have production errors to fix, so it’s probably a net win in productivity too if you spend much time debugging production errors (and you probably do).

Controversy level: Low, but it’s not nearly as widely implemented as you might expect. Another thing that is becoming more common, though, I think.

Assertions

I am a big fan of widespread use of assertions, and of leaving them on in production code.

The main reason for this is simple: The single biggest factor in ease of debugging is making sure that the point at which the error is reported is as close as possible to the point at which the error occurs. Assertions are a very good way to do this because they turn a failure of understanding into a runtime error: If your code is not behaving in a way you’d expect, that becomes an error immediately, and it is much easier to debug than finding the downstream thing that actually went wrong at some point later.

Assertions also have a huge benefit when doing property-based testing, because they greatly increase the scope of the properties tested – problems that might not be noticed by the explicit test become much more visible if they trigger an assertion failure.

Input validation, while technically not assertion, has the same effect – a function which checks its arguments rather than silently doing the wrong thing when given a bad one will be significantly easier to debug.
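
A small illustration of both (the function is made up for the example):

    def allocate_seats(requested, available):
        # Input validation: fail at the caller's mistake, not three
        # functions downstream when the bad value finally bites.
        if requested < 0:
            raise ValueError("requested must be non-negative, got %d"
                             % requested)
        if requested > len(available):
            raise ValueError("not enough seats available")

        allocated = available[:requested]

        # Assertion: if this fires, the bug is in this function, and
        # the error report points right at it.
        assert len(allocated) == requested
        return allocated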

John Regehr has a good post on the care and feeding of assertions that I recommend if you want to read further.

Estimated cost: Low if you just start adding them in as you develop and edit code. Requires a bit of careful thinking about what the code is doing, but that’s no bad thing.

Estimated benefit: Modest. This won’t be life changingly good, but I have frequently been grateful for a well placed assertion in my code preventing what would otherwise be a much more confusing bug.

Controversy level: People don’t really seem to have an opinion on this one way or another, but it’s not a common habit at all. I’ve not seen it be widespread at any company I’ve worked for.

Code Review

I think all projects with > 1 person on them should put all code changes through code review.

Code review seems to be a fairly cost effective defect finding tool according to the literature. I previously believed this not to be the case, but I’ve done some reading and I changed my mind.

But regardless of whether you find defects, it will ensure two very important things:

  1. At least one other person understands this code. This is useful both for bus factor and because it ensures that you have written code that at least one other person can understand.
  2. At least one other person thinks that shipping this code is a good idea. This is good both for cross-checking and because it forces you to sit down and think about what you ship. This is quite important: fast feedback loops are good for development, but slow feedback loops for shipping make you pause and think about it.

Over time this will lead to a significantly more maintainable and well designed piece of software.

Estimated cost: You need to get a code review system set up, which is a modest investment and may be trivial. I can’t really recommend anything in this space as the only things I’ve used for this are Github and proprietary internal systems. Once you’ve got that, the ongoing cost is actually quite high because it requires the intervention of an actual human being on each change.

Estimated benefit: It’s hard to say. I have never been part of a code review process that I didn’t think was worth it, but I don’t have a good way of backing that up with measurements. It also depends a lot on the team – this is a good way of dealing with people with different levels of experience and conscientiousness.

Controversy level: Fairly uncontroversial, though at least amongst small companies it used to be weird and unusual. At some point in my career it went from “nobody does this” to “everybody does this”. I think a combination of GitHub pull requests and acknowledgement that most of the cool big companies do it seems to have taken this from a niche opinion to widespread practice in a remarkably short number of years.

Continuous Delivery

Another part of localising things to when they went wrong is that ideally once something has passed code review it will ship as soon as possible. Ideally you would ship each change as its own separate release, but that isn’t always practical if you’re e.g. shipping client side software.

This helps ensure that when something goes wrong you have a very good idea of what caused it because not that much changed.

Another important part of this is that when a release goes out you should always be able to roll it back easily. This is essential if you want to make releasing low cost, which is in turn essential for having this sort of frequent release.

A thing I have never worked with personally, but have regarded with envy, is a staged roll-out system: one which first rolls out to a small fraction of the customer base and then gradually ratchets up until it reaches 100%, rolling back automatically or semi-automatically if anything seems to have gone wrong in the process.
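
The core trick is simple even if the surrounding automation isn’t: deterministically bucket users so a feature can be ramped from 0 to 100%. A sketch:

    import hashlib

    def in_rollout(user_id, percent):
        # Hash rather than random: each user stays consistently in or
        # out of the rollout as the percentage ramps up.
        digest = hashlib.sha256(str(user_id).encode()).hexdigest()
        return int(digest, 16) % 100 < percent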

Estimated cost: The transitional period from infrequent to frequent deliveries can be a bit rough – you’ll need to spend time automating manual steps, looking for inefficiencies, etc. Take baby steps, gradually improving your frequency over time, and you can spread this cost out fairly easily.

Estimated benefit: A modest quality improvement, and quite a large improvement in debugging time if you currently end up with a lot of broken releases. The release process changes you have to make to make this work will probably also be a significant net time saver.

Controversy level: I’m not sure. It hasn’t seemed that controversial where I’ve seen it implemented, but I think larger companies are more likely to hate it.

Auto formatting and style checking

Code review is great, but it has one massive failure mode. Consider Wadler’s law:

In any language design, the total time spent discussing a feature in this list is proportional to two raised to the power of its position:

  0. Semantics
  1. Syntax
  2. Lexical syntax
  3. Lexical syntax of comments

Basically the same thing will happen with code review. People will spend endless time arguing about style checking, layout, etc.

This stuff matters a bit, but it doesn’t matter a lot, and the back and forth of code review is relatively expensive.

Fortunately computers are good at handling it. Just use an auto-formatter plus a style checker. Enforce that these are applied (style checking is technically a subset of static analysis but it’s a really boring subset and there’s not that much overlap in tools).

In Python land I currently use pyformat and isort for auto-formatting and flake8 for style checking. I would like to use something stronger for formatting – pyformat is quite light touch in terms of how much it formats your code. clang-format is extremely good and is just about the only thing I miss about writing C++. I look forward to yapf being as good, but I don’t currently find it to be there yet (at some point I need to rerun a variant of the bug finding mission I did for it last year). gofmt is nearly the only thing about Go I am genuinely envious of.

Ideally you would have your entire project be a fixed point of the code formatter. That’s what I do for Hypothesis. If you haven’t historically done that it can be a pain though. Many formatting tools can be applied based on only the edited subset of the code. If you’re lucky enough to have one of those, make that part of your build process and have it automatically enforced.
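
Enforcing the fixed point can be as simple as a CI step that runs the formatters and fails if anything changed. A sketch, assuming isort and a clean git checkout (substitute whichever formatters you actually run):

    import subprocess
    import sys

    # Run the formatter over the tree, then fail if it changed anything.
    subprocess.run(["isort", "."], check=True)
    if subprocess.run(["git", "diff", "--exit-code"]).returncode != 0:
        sys.exit("Tree is not a fixed point of the formatter; "
                 "run the formatters locally and commit the result.")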

Once you have this, you can now institute a rule that there should be no formatting discussion in code review because that’s the computer’s job.

Here’s a great post from GDS about this technique and how it’s helped them.

Estimated cost: Mostly tool dependent, but if you’re lucky it’s basically free. Also some social cost – some people really dislike using style checkers (and to a lesser degree auto-formatters) for reasons that don’t make much sense to me. I personally think the solution is for them to get over it, but it may not be worth the effort of fighting over it.

Estimated benefit: From the increased consistency of your code, small but noticeable. The effect on code review is moderate to large, both in terms of time taken and quality of review.

Controversy level: Surprisingly high. Some people really hate this advice. Even more people hate this advice if you’re not running a formatter that guarantees style conforming code (e.g. I’m not on Hypothesis because none of the Python code formatters can yet). I’ve only really seen this applied successfully at work once.

Documentation in the Repository

You should have a docs section in your repository with prose written about your code. It doesn’t go in a wiki. Wikis are the RCS of documentation. We’ve already established you should be using good version control and a monorepo, so why would you put your documentation in RCS?

Ideally your docs should use something like Sphinx so that they compile to a (possibly internally hosted) site you can just access.
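
The Sphinx setup required is tiny – a conf.py along these lines (the names are placeholders) plus a build step:

    # docs/conf.py - the minimum Sphinx needs to build a docs site
    project = "yourproject"
    extensions = []       # add e.g. "sphinx.ext.autodoc" as needed
    master_doc = "index"  # docs/index.rst is the root of the docs tree

    # Build with: sphinx-build -b html docs docs/_build/html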

It’s hard to keep documentation up to date, I know, but it’s really worth it. At a bare minimum I think your documentation should include:

  • Up to date instructions for how to get started developing with your code
  • Detailed answers to questions you find yourselves answering a lot
  • Detailed post-mortems of major incidents with your product

For most projects they should also include a change log which is updated as part of each pull request/change list/whatever.

It may also be worth using the documentation as a form of “internal blogging” where people write essays about things they’ve discovered about the problem domain, the tools you’re using or the local style of work.

Estimated cost: Low initial setup. Writing documentation does take a fair bit of time though, so it’s not cheap.

Estimated benefit: This has a huge robustness benefit, especially every time your team changes structure or someone needs to work on a new area of the code base. How much benefit you’ll derive varies depending on that, but it’s never none – if nothing else, everybody forgets things they don’t do often, but also the process of writing the documentation can hugely help the author’s understanding.

Controversy level: Another case of “most people probably agree this is a good idea but don’t do it”. Unless you’ve got someone pushing for it a lot, documentation tends to be allowed to slide. I’ve never really seen this work at any of the companies I’ve worked for.

Plan to always have more capacity than work

Nick Stenning made an excellent point on this recently: if your team is always working at full capacity then delays in responding to changes will skyrocket, even if changes are coming in at a rate you can handle.

As well as that, it tends to mean that maintenance tasks that could greatly improve your productivity will never get done – almost every project has a backlog of things that really annoy the developers, which they’d like to fix at some point and never get around to. Downtime is an opportunity to work on that.

This doesn’t require some sort of formal 20% time arrangement, it just requires not trying to fit a quart into a pint pot. In particular, if you find you’ve scheduled more work than got done, that’s not a sign that you slightly overestimated the amount you could get done; it’s a sign that you scheduled significantly too much work.

Estimated Cost: Quite expensive. Even if you don’t formally have 20% time, you’re probably still going to want to spend about 20% of your engineering capacity this way. It may also require significant experimentation to get your planning process good enough to stop overestimating your capabilities.

Estimated Benefit: You will be better able to respond to changes quickly and your team will almost certainly get more done than they were previously getting done in their 100% time.

Controversy level: Fairly high. Almost everywhere I’ve worked the team has consistently planned more work than they have capacity for.

Projects should be structured as collections of libraries

Modularity is somewhat overrated, but it’s not very overrated, and the best way of getting it is to structure things as libraries. The best way to organise your project is not as a big pot of code, but as a large number of small libraries with explicit dependencies between them.

This works really well, is easy to do, and helps keep things clean and easy to understand while providing push back against it all collapsing into a tangled mess.

There are systems like Bazel that are specifically designed around structuring your project this way. I don’t have very fond memories of its origin system, and I’ve not used the open source version yet, but it is a good way of enforcing a good build structure. Otherwise the best way to do this is probably just to create subdirectories and use your language’s standard packaging tools (which probably include a development mode for local use, e.g. pip install -e if you’re using Python).
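
In Python, a sketch of what each library looks like inside the repo (all the names here are invented):

    # libs/billing/setup.py - one small library within the monorepo
    from setuptools import setup, find_packages

    setup(
        name="billing",
        version="0.1.0",
        packages=find_packages(),
        # Explicit dependency on a sibling library in the same repo:
        install_requires=["payments"],
    )

    # For local development, install it editably:
    #   pip install -e libs/billing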

Some people may be tempted to do this as microservices instead, which is a great way to get all the benefits of libraries alongside all the benefits of having an unreliable network and a complicated, fragile deployment system. There are some good reasons to use microservices in some situations, but using them purely as a way to achieve modularity is just a bad idea.

Estimated cost: Quite low. Just start small – factor out bits of code and start new code in its own library. Evolve it over time.

Estimated benefit: Small to moderate productivity enhancement. Not likely to have a massive impact on quality, but it does make testing easier so it should have some.

Controversy level: Fairly low. I’m not sure people have an opinion on this one way or the other.

Ubiquitous working from home

I’m actually not a huge fan of massively distributed teams, mostly because of time zones. It tends to make late or early meetings a regular feature of people’s lives. I could pretend to be altruistic and claim that I disapprove of that because it’s bad for people with kids, which it is, but I also just really hate having to do those myself.

But the ability to work from home is absolutely essential to a productive work environment, for a number of reasons:

  1. Open plan offices are terrible. They are noisy distraction zones that make it impossible to get productive work done. Unfortunately, this battle is lost. For whatever reason the consensus is that it’s more cost effective to cram developers in at 50% efficiency than it is to pay for office space. This may even be true. But working from home generally solves this by giving people a better work environment that the company doesn’t have to pay for.
  2. Requiring physical presence is a great way for your work force to be constantly sick! People can and should take sick days, but if people cannot work from home then they will inevitably come in when they feel well enough to work but are nevertheless contagious. This will result in other people becoming ill, which will either result in them coming and spreading more disease or staying home and getting nothing done. Being able to work from home significantly reduces the incentive to come in while sick.
  3. Not having physical access to people will tend to improve your communication patterns to be lower interrupt and more documentation driven, which makes them work better for everyone both in the office and not.

I do not know what the ideal fraction of work from home to work in the office is, but I’d bet money that if most people are not spending at least two days per week working from home then they would benefit from spending more. Also, things will tend to work better as the fraction increases: If you only have a few people working from home at any given point, the office culture will tend to exclude them. As you shift to it being more normal, work patterns will adapt to accommodate them better.

Estimated cost: There may be some technical cost to set this up – e.g. you might need to start running a VPN – but otherwise fairly low. However there may be quite a lot of political and social push back on this one, so you’re going to need a fair bit of buy in to get it done.

Estimated benefit: Depends on team size and current environment, but potentially very large productivity increase.

Controversy level: Fairly low amongst developers, fairly high amongst the non-developers who you’ll probably need to get sign off on it.

No Long Working Hours

Working longer work weeks does not make for more productive employees, it just results in burning out, less effective work and spending more time in the office failing to get anything done. Don’t have work environments that encourage it.

In fact, it’s better if you don’t have work environments that allow it, because it will tend to result in environments where it goes from optional to implicitly mandatory due to who gets rewarded. It’s that reading managers’ minds thing again.

Estimated cost: Same as working from home: Low, but may require difficult to obtain buy in. Will probably also result in a transitional period of lower productivity while people are still burned out but less able to paper over it.

Estimated benefit: High productivity benefits, high quality benefits. Exhausted people do worse work.

Controversy level: High. Depending on who you’re talking to this is either obviously the correct thing to do or basically communism (there may also be some people who think it’s basically communism and that’s why they like it).

Good Work Culture

Or the “don’t work with jerks” rule.

People need to be able to ask questions without being afraid. People need to be able to give and receive feedback without being condescending or worrying that the other side will blow up at them or belittle them. People need to be able to take risks and be seen to fail without being afraid of how much it will cost them.

There are two major components to this:

  1. Everyone needs to be on board with it and work towards it. You don’t need everyone to be exquisitely polite with everyone else at all times – a certain amount of directness is usually quite beneficial – but you do need to be helpful, constructive and not make things personal.
  2. Some people are jerks and you should fire them if they don’t fix their ways.

It is really hard to do the second one, and most people don’t manage it, but it’s also really important. Try to help them fix their ways first, but be prepared to let them go if you can’t, because if you’ve got a high performing jerk on the team it may be that they’re high performing largely because they’re making everyone else perform worse. Even if they really are that good, they’re probably not good enough to justify the reduced productivity from everyone else.

Note: This includes not just developers but also everyone else in the company.

Estimated cost: High. Changing culture is hard. Firing people is hard, especially if they’re people who as individual performers might look like your best employees.

Estimated benefit: Depending on how bad things are currently, potentially extremely high. It will bring everyone’s productivity up and it will improve employee retention.

Controversy level: Another “not controversial but people don’t actually do it”. I’ve mostly seen the end result of jerks leaving of their own volition and everyone breathing a sigh of relief and experiencing a productivity boost.

Good Skill Set Mixing

You generally want to avoid both silos and low bus factors.

In order to do that, it’s important to have both overlapping and complementary skills on your team: A good rule of thumb is that any task should have at least two people who can do it, and any two people should have a number of significant tasks where one would obviously be better suited to work on it than another. The former is much more important than the latter, but both are important.

Having overlapping skills is important because it increases your resilience and capacity significantly: If someone is out sick or on holiday you may be at reduced capacity but there’s nothing you can’t do. It also means there is always a second perspective you can get on any problem you’re stuck with.

Having complementary skills is important because that’s how you expand capabilities: Two people with overlapping skills are much better able to work together than two people with nothing in common, but two people with identical skills will not be much better working together than either of them individually. On the other hand two people working together who have different skill sets can cover the full range of either of their skills.

This is a hard one to achieve, but it will tend to develop over time if you’re documenting things well and doing code review. It’s also important to bear in mind while hiring.

Estimated cost: Hard to say because what you need to do to achieve it is so variable, but it will probably require you to hire more people than you otherwise would to get the redundant skill sets you need, so it’s not cheap.

Estimated benefit: Huge improvement in both total team and individual productivity.

Controversy level: Not exactly controversial, but tends not to happen in smaller companies due to people not seeing the benefit. Where it happens it tends to happen by accident.

Hire me to come take a look at what might be wrong

I do do software consulting after all. This isn’t what I normally consult on (I’m pretty focused on Hypothesis), but if you’d like the help I’d be happy to branch out, and after accidentally writing 5,000 words I guess I clearly have a few things to say on the subject.

Drop me an email if you’re interested.

Estimated cost: My rates are very reasonable.

Estimated benefit: You’re probably better placed to answer this one than I am, but if this document sounds reasonable to you but you’re struggling to get it implemented or want some other things to try, probably quite high.

Controversy level: Not at all controversial. Everyone thinks this is a great idea and you should do it. Honest.
