These are some of the links from all over the Web that I enjoyed the most this month:
Marc’s Blog - Erasure Coding versus Tail Latency. This is a very clever trick. I was grinning by the time I finished reading about it. I’m so used to erasure coding being associated with error correction (and perhaps secret-sharing) that using it for something else feels like a whole new invention. I don’t know if I’ll ever have a need for this technique, but I’m really pleased to have it in my bag of tricks.
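The trick, as I understand it, is that with a k-of-(k+m) erasure code you can issue all k+m fragment requests at once and complete the read as soon as any k arrive, so one slow node no longer drags the whole read into the tail. A toy simulation of just the timing effect (no actual coding math; the latency distribution and parameters are made up for illustration):

```python
import random
import statistics

random.seed(0)

def node_latency():
    # Mostly fast, occasionally very slow: a heavy tail.
    return random.expovariate(1.0) + (50.0 if random.random() < 0.01 else 0.0)

def read_latency(k, m):
    # Issue k + m fragment requests; the read completes once any k
    # fragments arrive, so its latency is the k-th fastest response.
    latencies = sorted(node_latency() for _ in range(k + m))
    return latencies[k - 1]

def p99(samples):
    return statistics.quantiles(samples, n=100)[98]

trials = 20_000
plain = [read_latency(4, 0) for _ in range(trials)]  # no redundancy: wait for all 4
coded = [read_latency(4, 2) for _ in range(trials)]  # 2 extra coded fragments

print(f"p99 without redundancy: {p99(plain):.1f}")
print(f"p99 with 2 extra fragments: {p99(coded):.1f}")
```

With no spare fragments, any one of the four nodes hitting its slow path (about a 4% chance per read) blows the p99; with two spares, you'd need three slow nodes at once, which essentially never happens.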
JWZ.org - Mozilla.org’s 25th anniversary. A great set of links and stories from an epochal event (I particularly like They Live and the secret history of the Mozilla logo).
Vadim Kravcenko - Things they didn’t teach you about Software Engineering. This is good advice.
Irrational Exuberance - What does it mean to be a cost center? Offers an alternative definition: “A cost center is a function that is operated by optimizing its existing components” and explains the danger of being too inward-looking. I think this applies to team-level work as well as the whole department, and not just when working with non-engineers.
Idle Words - Why Not Mars. Being a certifiable rocket-ships-and-spacefarers fan wasn’t enough to keep this from being sadly convincing. At least it’s funny.
Matt Welsh - Hey, let’s fire all the devs and replace them with AI! This made me substantially more pessimistic about the impact of large language models (like ChatGPT) on the software development profession in particular (though still only in the long term).
By and large I’ve felt the advances in machine learning have been overhyped, and will not in the near term have the Earth-shattering impacts many fear (or look forward to). The models generate convincing-looking but ultimately vacuous text: superficially plausible, yet devoid of understanding and analysis. BS, in other words. They can’t analyze a novel problem and tailor a solution uniquely targeted to it; they can only regurgitate generic boilerplate. So for anything with real-world consequences that you only get one shot at, like spending a lot of money filling a Super Bowl ad slot, you’ll still want humans.
However, software is uniquely capable of existing entirely in the digital realm. If an AI spits out a bunch of code that misses a fundamental part of the problem you’re trying to solve… that’s OK, because you can test the output on computers without incurring substantial real-world costs, find out what you missed, and try again (the computer won’t get bored or tired, after all). So the argument Matt Welsh makes is that it doesn’t matter that the AI doesn’t “understand” the problem and is certain to get the wrong answer over and over again — because ultimately it is many orders of magnitude cheaper to keep correcting it and iterating closer to a “good enough” solution than it is to have humans do it right in the first place.
In the long run, the thing that triumphs is often not what is best, but what has the best survival characteristics. Worse is better.
Baldur Bjarnason - Theory-building and why employee churn is lethal to software companies. The value of a software project is the accumulated knowledge and understanding of how to address the problem — not the typed code. Technically from last year, but it’s new to me. See also How to Build Good Software: “Software is about developing knowledge more than writing code”.