
Code Reviews In The Workplace

I have touched upon code review practices in the past, as well as covering some of the requests I have listened to during code reviews, but I have so far avoided discussing directly how I engage with code reviews, both as an author and as a reviewer. I think that might be particularly interesting as a bridge between the corporate and Open Source worlds, which I have straddled for long enough.

I could spend some time discussing the way code review workflows work in practice — both Google and Meta¹ have published articles covering the topic, so I wouldn’t find myself much restricted in terms of what I can or cannot say. But given that they have already published that much, I don’t think there’s much more there for me to rehash, particularly as I don’t need to write filler for this blog.

So instead, let me dispel two popular fictions: firstly, the idea that code reviews as used in a workplace are meant to uphold the principles of software engineering that are taught in academia, and secondly that code reviews as applied by Free Software projects are the same as code reviews applied at a big corporation.

It’s no surprise that I have no particular love for academia, particularly when it comes to its interface with real-world professional software development. While I can respect academic work for its own sake, I have had relatively sour experiences when it comes to reviewing code (and design) together with more academically minded colleagues. From the point of view of the purely academic strawman (to avoid singling out any particular colleague at any time in my career, and for the sake of making this post more easily readable), the most important objective is to insist that the code is correct in terms of engineering practices — with little thought or care for how the code plays in the larger scheme of things. For example, they might insist on never accessing environment variables, despite the underlying system using those to communicate parameters (and it being a de facto industry standard.)

Now, the opposite strawman is not a pleasant one either, of course – I wrote about the important trade-off that is technical debt – so there is no way to entirely isolate the important high-level objectives of software engineering from writing code professionally. You almost certainly want to keep a bar, however low, of best practices and code quality: not for the sake of the academic quality of the code, but because those influence the velocity and reliability of the code itself.

Looking then at what I expect the value of code reviews in a work environment to be, I would point primarily at the fact that this is a business process that has to add business value. A perfectly polished engineering marvel without business value is like a Juicero: impressive and unsustainable. For me, that means that when I review code, while I obviously try to keep the code quality high, and insist on proper testing, my focus is to assess whether the change I’m reviewing is actually achieving the objective it set out to achieve, and whether it has been sufficiently tested and vetted.

While I did enjoy being part of the Python Readability program at Google, which focused almost entirely on the best practices of the language, I do limit how much I insist on a similar baseline as part of my current daily work. That does not mean I completely avoid it: keeping a baseline of practices is important, not just as a matter of code quality, but because the velocity of a team depends also on the consistency of their codebase. As a practical example, my current team has collected over time a number of Python tools and services developed at different times by different teams, which turned out to use quite different baselines, both in terms of the Python version they target and their internal conventions. Investing in “raising the tide” across all of these, so that they at least have the same set of APIs available, meant you didn’t have to remember what the specific limitations of each of those tools were when making a change, or whether a specific Python feature would be usable.

As I wrote previously, this is where linters and (opinionated) formatters are a huge advantage. When the tools already enforce a minimum baseline on the code, neither authors nor reviewers need to spend time on that back-and-forth. But particularly at the time these tools are introduced, they can lead to disruptions if the discontinuity they bring is not managed.

The most obvious example of this is introducing formatters, particularly for developers of open source projects that have seen this happen before: running a formatter creates a literal discontinuity in the revision history of the code, which breaks workflows that depend on the ability to port patches between release branches, as well as on using the so-called blame layer to identify who introduced which change. To this day, I’m not aware of any generally available SCM that can “see through” the formatting changes without incorrectly attributing the code’s authorship. As such, I have rarely seen autoformatting applied tout-court to a large codebase, even in closed repositories.

Unfortunately the alternative (formatting a file contextually with making changes to it) makes for a miserable review experience. My personal choice is to bite the bullet, and reformat a codebase the very moment I take responsibility for (or, depending on the corporate expectation, ownership of) it. This ensures that any follow-up change contains only the intentional changes, and can be reviewed easily — and the disturbance to the blame layer has a lot less impact when it effectively points at “this is code that was inherited from a previously-unmaintained state.”

Linters have a similar problem: if you introduce linters without first making sure the codebase is not triggering them, you end up in a corner when a developer authors a change that triggers said linters. Should the author of the change be required to address pre-existing linter warnings, particularly when they are not affected by the change itself?

Large organizations tend to build tooling to address this. My employer has described auto-fixing linters before, and even when I was at Google I executed large-scale changes before introducing new linter warnings, to make sure they wouldn’t fire on already existing code. This tension is quite hard to resolve — personally I get annoyed at lint warnings and sometimes just go ahead and send a review request to fix them simply because I was looking at the code, but at the same time I do not expect others to do the same, as the business value of those changes is (usually) negligible — usually, because sometimes those lints point at an actual implementation mistake, and addressing them may fix a rarely hit bug.
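One possible mitigation, sketched here as a minimal illustration rather than a description of any employer’s actual tooling, is to scope the lint run to the files a change actually touches; the sketch assumes git as the SCM and flake8 as the linter, neither of which is implied by anything above.

```python
#!/usr/bin/env python3
"""Sketch: lint only the Python files touched by the current change.

Illustrative only; assumes git and flake8, and that the change is being
compared against an origin/main branch.
"""

import subprocess
import sys


def changed_python_files(base: str = "origin/main") -> list[str]:
    # Ask git which files differ from the base branch, excluding deletions.
    output = subprocess.run(
        ["git", "diff", "--name-only", "--diff-filter=d", base],
        check=True,
        capture_output=True,
        text=True,
    ).stdout
    return [path for path in output.splitlines() if path.endswith(".py")]


def main() -> int:
    files = changed_python_files()
    if not files:
        return 0
    # Pre-existing warnings in untouched files never reach the author.
    return subprocess.run(["flake8", *files]).returncode


if __name__ == "__main__":
    sys.exit(main())
```

A script along these lines can sit in CI or in a pre-submit hook, so that authors only ever see warnings for code they actually changed.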

What does bring business value in a code review, then? Personally, I try to pay attention to interfaces and communication points: ensuring that APIs (public and not) are not ambiguous, and that they are resilient to the most likely errors that consumers of the interface would be prone to make. Other reviewers might specialize in performance, testing, or documentation — sometimes the choice of reviewer is influenced by what you need to verify rather than by what the change is really about.
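To make the point about ambiguous interfaces concrete, here is a hypothetical Python example (the functions, their parameters, and the Durability enum are all invented for illustration): a pair of positional booleans is easy to call incorrectly, while keyword-only parameters and an explicit enumeration push the likely mistakes back to the call site, where a reviewer can see them.

```python
import enum


class Durability(enum.Enum):
    """Hypothetical delivery guarantee for an invented messaging API."""

    BEST_EFFORT = enum.auto()
    PERSISTENT = enum.auto()


# Ambiguous: at the call site, send_ambiguous("hello", True, False) tells the
# reviewer (and the next reader) nothing about what the two booleans mean.
def send_ambiguous(message: str, persistent: bool, compress: bool) -> None:
    ...


# Less ambiguous: keyword-only parameters and an enum force the call site to
# spell out the intent, e.g.
#     send("hello", durability=Durability.PERSISTENT, compress=False)
def send(message: str, *, durability: Durability, compress: bool = False) -> None:
    ...
```

The second form costs a few more lines up front, but it keeps the call site legible without having to open the function’s definition.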

This brings me to the other large difference between reviews as executed in open source projects and in a business setting. Most of the time, though not always, open source code reviews are driven by the author’s intention to get their code merged upstream, making it possible to push back more on the author to address the changes requested by the reviewers — particularly when the reviewers don’t have a particular incentive to merge said code. Obviously nothing is so clear cut, but in projects like Linux, Python or LLVM, ideally the submitter has a heavier interest in getting their code merged than the receiving project has in accepting the contribution — this is particularly the case when the submission adds support for things like hardware or architectures whose adoption depends on the availability of a compiler, a kernel, or an optimised programming language runtime.

When reviewing code within the monorepo of a business, instead, the trade-off changes drastically. Both author and reviewer should have the same incentive for the change to be merged, as the change should only exist to increase business value (fixing issues, implementing new features, and so on), which means that pushing back too hard on landing a change is not useful for any of the participants. This is why you cannot use the same style of review — refusing a change because “you don’t like it this way” is, or should be, at least frowned upon – it is okay to refuse a specific implementation direction if the submitter is not part of the team that will maintain the codebase long term – while push-back reasoning can take a lot longer to form, as a reviewer digging into the actual implementation of an interface being used is still providing value, particularly if said interface is unclear.

A final difference between the two types of reviews is what I will define as seniority bias. In open source projects, you don’t generally make much of a distinction between reviewing code coming from a long-time contributor or a newcomer. If anything, you may “go easy” on a first-time contributor, as they may not know how the project runs, and what is and isn’t important in a review. In a business setting, it’s actually the other way around.

When I’m reviewing changes coming from a colleague who’s more senior and tenured, I’m likely going to trust their judgement on whether the API they are using is appropriate, and whether they have thought of alternative approaches, particularly for complicated changes. This is in part out of consideration for the fact that the time of senior engineers is more valuable – in pure monetary terms – and in part due to the already mentioned incentive structure: if a change introduced by a senior engineer causes a problem down the line, it’s usually the case that they will pick up the pieces. Unlike in open source projects, where contributors may “drive by” and move on after their change is merged upstream, the relationships within a healthy business have a stronger permanence.

On the other hand, when reviewing code from newcomers, I feel a responsibility not just to check for the not-so-obvious pitfalls that require domain-specific knowledge, but also to be as thorough in my reviews as I can, without being annoying, so that the next time they send me a change, I can trust them to have already covered what we have previously discussed. Once again, this is a matter of incentives — as a senior engineer, it is my responsibility to train and elevate new colleagues, and the time I spend on doing so is factored into my role.

So, if I want to summarise how I approach reviews in the workplace:

  • Focus on the objective of a change, more than its form: ugly code can hurt one’s sense of software engineering, but in my experience the real problems are in ambiguous interfaces rather than abstraction violations.
  • Keep nits and improvement suggestions as follow-ups: if your review system does not provide an easy way to tag a suggestion as something to follow up on, you may want to consider adding that — this is particularly useful for me to keep track of rakes, especially if I keep seeing a persistent pattern of mistakes among different authors.
  • Trust your peers if you don’t have a reason not to: this is easier in a business environment where you can (roughly) assume everyone is pushing towards the same goals, but that does not mean rubber-stamping everything you receive — obvious logic mistakes or dubious requirements are valid reasons to push back, stylistic debates are not a valid use of one’s time.
  • Make reviews into teachable moments: if you’re reviewing code from new members of your team (whether they are junior or senior), pay attention to what the common mistakes are, and consider addressing them somewhere — it can be documentation, an API change to remove ambiguity, or a new linter to avoid making the same mistake again (a minimal sketch of such a check follows this list).
  • Explain your change requests: this is an important one; too often I see people just suggesting an alternative approach to do the same thing, without explaining the reasons why, and that won’t stick in the author’s mind for the next time they make a similar change. In particular, in the past I kept a running document with explanations of common requests I had, inspired by a “nitpick list” of a more senior colleague — this allowed colleagues who had just started working on a project/language to check for known pitfalls before even sending their change out for review.
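As for turning a recurring mistake into a linter, here is a deliberately minimal sketch (illustrative only, not a production tool): a check for a classic Python pitfall such as mutable default arguments can be written with nothing more than the standard library’s ast module.

```python
"""Sketch of a homegrown check for a recurring mistake (illustrative only).

Flags mutable default arguments, a classic Python pitfall, using nothing but
the standard library's ast module.
"""

import ast
import sys


def find_mutable_defaults(source: str, filename: str) -> list[str]:
    findings = []
    for node in ast.walk(ast.parse(source, filename=filename)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # kw_defaults may contain None entries; isinstance() skips those.
            for default in node.args.defaults + node.args.kw_defaults:
                # A literal list, dict or set default is shared across calls,
                # which is almost never what the author intended.
                if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                    findings.append(
                        f"{filename}:{default.lineno}: mutable default in {node.name}()"
                    )
    return findings


if __name__ == "__main__":
    for path in sys.argv[1:]:
        with open(path, encoding="utf-8") as handle:
            for finding in find_mutable_defaults(handle.read(), path):
                print(finding)
```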

It might now be more visible, if not obvious, that sometimes the ideas of code reviews in open source and at work are quite at odds with each other, which in turn explains why the tools of the trade are so different between the two worlds.

¹ Some people have complained about the fact I refer to my current employer as Meta, while not referring to my previous one as Alphabet. To be entirely clear, I was never employed by Alphabet — my employment has always been with a subsidiary of Google LLC (Ireland and UK.) It is part of my view on professionalism to be intentional in how I reference my employers.
