
Whether or not to regularly spend time and effort upgrading dependencies can be a contentious topic on development teams. Advocates argue that skipping the work allows tech debt and bitrot to accumulate, while opponents accuse them of chasing new-and-shiny novelties and ignoring what’s actually valuable to the product. Despite what feels like an unending amount of time spent on the churn of upgrades, security teams still struggle to get risky old dependencies patched, and developers complain about using deprecated tools.

After being burned several times by excruciatingly tedious forced upgrades of vulnerable or broken legacy codebases, I’ve come down firmly on the side of favouring frequent updates — with plenty of flexibility and some caveats.

Background

The basics of the argument are as follows: Most software projects are not entirely standalone pieces of original code. They depend on third-party (usually free/libre or open-source) libraries, frameworks, and tools — usually both as part of their functionality and as infrastructure for working in the codebase (think test libraries/runners, static analysis tools, and the like). They likely also interface with third-party components like the operating system, language runtime, databases, caches, queues, CI/CD pipelines, etc. All of these dependencies have their own release cadences and support windows (even if only informally, such as volunteer FLOSS authors being less willing to help with questions or bugs in older releases). As these components “change out from under” the primary project, development teams have to choose whether to allow their use to drift further and further behind the latest upstream versions, or whether to take time away from working on their primary mission/revenue-generating product to merely “keep up” with changes in the outside world that don’t necessarily inherently make anything noticeably better.

The downsides to spending time upgrading are obvious: It takes time and effort. It doesn’t contribute to the actual product you’re trying to build. It may introduce new bugs or subtle incompatibilities. You don’t need whatever new feature is being added, so it’s just extra bloat and attack surface. You’ll have to learn the new way of doing things — or worse: your users might have to learn. For an online service, some kinds of upgrades may even require downtime! Are proponents of frequent updates just magpies chasing the new and shiny, or incompetents trying to seem productive with trivial and ultimately meaningless busywork?

On the other hand, deferring updates means getting further and further out of sync with what your project would look like if you were starting afresh today — a reasonable approximate definition of “bitrot” (or one of the many overloaded meanings of “tech debt”). The tools you use will feel limiting and crude when you compare them to the state of the art. Documentation and examples will be harder and harder to find as more and more material online assumes the existence of more modern APIs. Bug reports or questions will be outright rejected, or at best rebuffed with a request to retry with an up-to-date version. Features you hoped to use will turn out to be unavailable. And, of course, there’s the ever-present risk of a security vulnerability being discovered, with no patch available or forthcoming.

Why I favour frequent updates

My experience has been that in most projects that aren’t completely dead, you’ll end up needing to upgrade everything eventually — either because you need support for a crucial new feature, or because you need to keep up with security updates. This is especially true if the code is in any way exposed to a network, or if you have security and compliance audit requirements. That means that, to a rough approximation, upgrading rarely doesn’t reduce the total workload — it just shifts it. Then again, upgrading frequently doesn’t reduce it either! The breaking changes must still be handled, the new interfaces learned, and so on. Updating often, however, means the work required to upgrade to a current supported version at any given moment is likely much smaller. In aggregate it’s the same total, but it comes in bite-sized chunks.

That’s valuable simply because it’s more predictable — more controllable. That means fewer surprises partway through projects when a team discovers they have to upgrade a critical dependency to get access to a key feature, blowing up their release timeline. It also means less scrambling when there’s a zero-day vulnerability and teams find they can’t just apply a minor patch, but have to go through six major releases to get back onto a supported version.

If you upgrade regularly, you still have to get through those six major versions or whatever, but you do it at a time of your choosing, and can move faster or slower depending on what else you have going on, rather than being forced to do it all at once. Reducing the variance around the expected work isn’t as valuable as reducing the actual expected amount of work; but reducing the total work is not an option — so controlling the variance is the next best thing.
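To make that concrete, here’s a toy simulation; the model and all of its numbers are my own illustrative assumptions, not measurements. Twenty dependencies each have some chance of shipping a breaking change in a given quarter, and each breaking change costs one unit of migration work whenever you finally cross it:

```python
import random
import statistics

# Toy model -- the numbers here are illustrative assumptions, not data:
# 20 dependencies, each with a 25% chance per quarter of shipping a
# breaking change that costs one unit of migration work. The work must
# be done eventually either way; only its timing differs.
random.seed(42)
QUARTERS = 40
DEPS = 20
P_BREAK = 0.25

changes = [[random.random() < P_BREAK for _ in range(DEPS)]
           for _ in range(QUARTERS)]

def frequent(changes):
    """Upgrade every quarter: pay for each quarter's breaks as they land."""
    return [sum(quarter) for quarter in changes]

def deferred(changes, force_every=8):
    """Defer everything until a forced upgrade (audit, zero-day, needed
    feature) every two years makes you pay the whole backlog at once."""
    work, backlog = [], 0
    for i, quarter in enumerate(changes):
        backlog += sum(quarter)
        if (i + 1) % force_every == 0:
            work.append(backlog)
            backlog = 0
        else:
            work.append(0)
    return work

for name, per_quarter in (("frequent", frequent(changes)),
                          ("deferred", deferred(changes))):
    print(f"{name}: total work = {sum(per_quarter)}, "
          f"worst quarter = {max(per_quarter)}, "
          f"stdev = {statistics.pstdev(per_quarter):.1f}")
```

Running it, both strategies pay the same total, but the deferred one concentrates everything into a few enormous quarters: exactly the kind of surprise that blows up timelines.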

Concessions to practicality

None of this means you have to handle every update the moment it becomes available. A realistic implementation of this practice might be to check for and release small updates every couple of weeks, every month, or each quarter. If your routine check turns up something that requires more effort, you can use your usual planning process to get it scheduled for the not-too-distant future — it doesn’t have to be right away. If you’re in the middle of a really busy time or have a looming deadline, you can temporarily pause or defer more of the routine updates. The goal is to reduce unwelcome surprises, so just knowing that the work needs to be done is a benefit, even if you can’t complete it immediately.
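As a sketch of what the routine check itself could look like on a Python project: pip really does support “pip list --outdated --format=json”, though the triage rule below (flagging major-version jumps for the planning process) is just one assumed policy, not a recommendation.

```python
import json
import subprocess

# Ask pip which installed packages have newer versions available.
result = subprocess.run(
    ["pip", "list", "--outdated", "--format=json"],
    capture_output=True, text=True, check=True,
)

for pkg in json.loads(result.stdout):
    current, latest = pkg["version"], pkg["latest_version"]
    # Hypothetical triage rule: major-version jumps go into the normal
    # planning process; everything else rides along with routine releases.
    major_jump = current.split(".")[0] != latest.split(".")[0]
    label = "PLAN" if major_jump else "routine"
    print(f"[{label}] {pkg['name']}: {current} -> {latest}")
```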

Upgrading frequently also means teams will grow to value and invest in tools, practices, and processes that make updates less painful. It’s obvious that one gets better at what one does often — but the corollary is that we’ll be bad at things we do rarely. It may seem daunting to some teams to keep up with every component’s release schedule, but taking the leap and trying to do it anyway is likely to change just how daunting it is. Teams should invest in automation for testing, staged deployments, gradual rollouts, detection of problems, and rollbacks. All of these will make frequent updates more realistic — and make the team better at shipping their primary product too.

Updating frequently doesn’t mean you have to chase every latest version of every component in the stack, or that you should install piping hot fresh tarballs straight out of the upstream release process. It’s perfectly sensible to have policies like letting certain things simmer for a few months before adopting them, waiting for a .1 release after each new major version, or keeping foundational components on long-term support releases only. The main thing is to have a regular cadence of updates — not necessarily to always be on the latest version. It’s just that a regular tempo of updates means you simply can’t get too far behind the latest.
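Policies like these are simple enough to encode. Here’s a hypothetical sketch (the should_adopt helper, its thresholds, and its rules are all assumptions for illustration, not any real tool’s API) of a “simmer for sixty days and skip .0 releases” filter:

```python
from datetime import date, timedelta

MIN_AGE = timedelta(days=60)  # let new releases "simmer" before adopting

def should_adopt(version: str, released: date, today: date) -> bool:
    """Hypothetical adoption policy: skip anything too fresh, and wait
    for the .1 release of each new major version line."""
    minor = int(version.split(".")[1])
    if today - released < MIN_AGE:
        return False  # still piping hot; let others find the sharp edges
    if minor == 0:
        return False  # hold out for the .1 of this major line
    return True

# e.g. a months-old 5.0 is still skipped, but a seasoned 5.1 passes:
print(should_adopt("5.0", date(2024, 1, 10), date(2024, 6, 1)))  # False
print(should_adopt("5.1", date(2024, 3, 1), date(2024, 6, 1)))   # True
```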

Do what works for you

As with most practices, what’s “right” is highly contextual. There’s no universally correct answer, just answers that make sense for your team and situation. The position I’ve taken above is based on a model of a team that has a rigorous automated test suite and a fast and easy way of automatically validating and rolling out updates. If you don’t have those, upgrades are much riskier, and your energy would be better spent on getting those fundamentals in place first.

The viability of frequent updates isn’t just affected by your context; the practice also shapes that context and influences other choices. In particular, I hope that it leads teams to be more thoughtful in taking on new dependencies. New libraries or tools are not just “free code I don’t have to write”, but also an ongoing cost. Is the effort saved in the moment worth the long-term work? Would it be better to reimplement or “vendor in” just the small piece you need? Can you make do with the components you already have, even if they’re not a perfect fit? Functionality has a cost, whether you pay for it upfront by creating it yourself, or pay for it over time by wrangling updates and adapting to changing APIs… but it must be paid for. Teams that pretend it can be avoided are (consciously or not) trying to benefit in the short term at the expense of the long-term health of the project they are stewards of. Taking on dependencies and never updating them is asking for a free lunch, and there ain’t no such thing.

