Quick context: the author of colors, a popular JavaScript package, updated it with malicious code that spins in an infinite loop. Because of how JavaScript package management works, this update was picked up globally and blocked deploys until a fix could be applied. For companies that don't test before deploying, it took down production.
The HN discussion about this article is extensive, much of it focusing on whether what happened with colors is really an "attack" or not. While that question is interesting, the article itself has some really good insights about package manager behavior and how it affects the overall ecosystem. I've worked on software build infrastructure for a few years, which is a closely related problem to package management. The key insight that Russ highlighted for me was that the way a package manager resolves dependencies has second-order effects on how resilient the overall language ecosystem is to errors, whether intentional or not.
In particular, Russ' distinction between a high-fidelity build and a low-fidelity build seems extremely useful to me, and I hadn't run across it before. In short, high-fidelity builds resolve transitive dependencies to the versions that your direct dependencies were actually built and tested with. Low-fidelity builds instead prefer the newest allowed versions, and therefore suffer when new releases turn out to be broken or incompatible. Russ makes several other points around this, so the whole post is worth a read, but I wanted to highlight this aspect that was both new to me and useful. I will specifically look for this trait in package managers I evaluate, as it would have saved me a lot of pain in previous JavaScript and Python projects!
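To make the distinction concrete, here's a toy sketch of the two strategies picking a version of colors when a new, broken release appears. This is not npm's or Go's actual algorithm, and the version numbers and function names are hypothetical:

```typescript
// Toy sketch only: not npm's or Go's real resolution algorithm, and the
// package versions below are illustrative, not the actual colors releases.

type Version = string;

// Every published version of a package, oldest to newest.
const published: Record<string, Version[]> = {
  colors: ["1.4.0", "1.4.1", "1.5.0"], // imagine 1.5.0 just shipped broken
};

// The version each direct dependency declares it was built and tested with.
const testedByDirectDep: Record<string, Version> = {
  colors: "1.4.0",
};

// Low-fidelity: prefer the newest version matching a loose range (any 1.x
// here), so a freshly published broken release is picked up immediately.
function resolveLowFidelity(pkg: string): Version {
  const candidates = published[pkg].filter((v) => v.startsWith("1."));
  return candidates[candidates.length - 1];
}

// High-fidelity: prefer the version the direct dependency was tested with;
// nothing changes until someone explicitly upgrades.
function resolveHighFidelity(pkg: string): Version {
  return testedByDirectDep[pkg];
}

console.log(resolveLowFidelity("colors"));  // "1.5.0"  (never tested together)
console.log(resolveHighFidelity("colors")); // "1.4.0"  (known-good pairing)
```

With the low-fidelity strategy, the broken release flows into builds the moment it's published; with the high-fidelity strategy, nothing changes until someone deliberately upgrades.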
What NPM Should Do Today To Stop A New Colors Attack Tomorrow