Maintaining npm packages is hard, and writing Open Source is already an achievement. But let’s not get complacent. Here I’ve tried to compile a list of ideas for improving further. I haven’t implemented some of them yet. A few entries at the bottom are so novel that I think nobody in the world has either.
In the readme, put `install` and `import`
Some npm packages still use default exports, which is not a best practice:

```js
import callMeAnyWayYouLike from "somePackage";
```

Named exports should be used instead:

```js
import { specialFunction } from "somePackage";
```

Also, default-exporting packages tend to omit the `import`/`require` code line from their readmes (since you can name the import any way you want), which drives newbies, and diligent people already accustomed to named imports, nuts.
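In other words, the quick-start in the readme should show both lines, something like the following (the package and function names here are just placeholders):

```bash
npm i somePackage
```

```js
import { specialFunction } from "somePackage";
```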
If you haven’t already, rebase your packages to use named exports and put the `import` line example in the readme. The `import` is ES Modules syntax, which leads to the next point…
Take care of ESM
If your program is not pure ESM yet, consider rebasing and implementing it. Now is a good time; at the moment of writing, Node 12 is already deprecated, and it was the first Node version to support ESM.

If it is pure ESM, then:
- Declare that in the readme.
- If you had older non-ESM versions, mention that. Maybe somebody can use them in the meantime. Don’t lose users.
- If you see traffic to those non-ESM versions, consider back-porting new features to the last pre-ESM major version — take care of your users.
- Delete the `main`, `module`, and `browser` keys in package.json and use `exports` only. Don’t be mistaken — there’s no “improved backwards compatibility” from having those keys.
- List all builds in `exports` properly, for example:

```json
{
  "exports": {
    "script": "./dist/detergent.umd.js",
    "default": "./dist/detergent.esm.js"
  }
}
```
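If it helps, a minimal sketch of a pure-ESM package.json, assuming a single entry point (the dist path is a placeholder), would also carry the `type` field:

```json
{
  "type": "module",
  "exports": {
    ".": "./dist/index.js"
  }
}
```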
Nail the transpile settings
Essentially, there are three aspects of how any npm package is built, and a consumer should be aware of them:
- What format is it? Options: ESM, CJS, UMD or IIFE. At the time of writing, codsen packages ship only ESM and IIFE. `rollup` could do UMD, but it’s slow. Our preferred `esbuild` can do only IIFE, not UMD.
- Is the particular build transpiled? If so, to what? We used to transpile to ES5, but not any more. At the time of writing, codsen package ESM builds use the `esbuild` target `esnext`, while the IIFE builds use the target `chrome58` (see the sketch after this list).
- Is the particular build bundled? That is, are all its dependencies included in the shipped program’s “lump” of code? ESM and CJS builds are typically not bundled; IIFE and UMD builds typically are.¹
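To make those three aspects concrete, here is a rough sketch of an `esbuild` build script along those lines; the entry point, output file names and the global name are placeholders, not our exact setup:

```js
// build.js: a rough sketch, not the actual codsen build setup
import { build } from "esbuild";

// untranspiled, unbundled ESM build for Node and modern bundlers
await build({
  entryPoints: ["src/main.ts"],
  format: "esm",
  target: "esnext",
  bundle: false,
  outfile: "dist/index.esm.js",
});

// transpiled, bundled IIFE build for browser <script> tags
await build({
  entryPoints: ["src/main.ts"],
  format: "iife",
  globalName: "myPackage", // the IIFE assigns everything to window.myPackage
  target: "chrome58",
  bundle: true,
  minify: true,
  outfile: "dist/index.umd.js",
});
```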
Also, it’s good to think further about how real users will use the program. Do you remember how Create React App failed if you installed raw, untranspiled ES6 dependencies? That went on for two years, from CRA’s inception until the autumn of 2018. Thousands of people’s hours were wasted as a result.
¹ Also, webpage-script ESM builds exist, which muddies the water.
Nail the engines
As you know, `package.json` can enforce requirements on the user’s platform:
```json
{
  "engines": {
    "node": "^12.20.0 || ^14.13.1 || >=16.0.0"
  }
}
```
But those ranges need to be proven on a CI. I’m guilty of this myself. It’s a feat to test the whole monorepo; I worked hard to reduce the build times — rollup + babel used to take between 1 and 2 hours, which I got down to ~20 minutes. But that’s just one Node version, currently 16. If, let’s say, I add 14 and 18, I’m back where I started: CI builds that last at least an hour.
What’s worse, introducing `turborepo`, which is written in Go, blew the Semaphore CI Open Source account’s 4 GB memory limit, so I had to upgrade to the paid, top-tier 16 GB plan. For example, May 2022 cost us $3.83, and April $8.03. So, triple that to test three Node versions. To hell with it, quadruple it, let’s test Node 12 too (or do a major release bump and raise the `engines` on every single package)!
If you wonder how Semaphore compares to GitHub Actions, Semaphore is cheaper.
Prove all the builds work
Not everybody ships browser script builds, UMD or IIFE, of their packages. But for those who do, it’s worth testing them.
A few things can go wrong in browser-script builds, mainly stemming from the script specifics: globals, bundling and transpiling:
- Since UMD/IIFE works by assigning your exported function to a global variable, that variable can theoretically clash with one of the named exports, rendering the program unusable, but here only. The ESM build would theoretically be unaffected. A sneaky bug.
- Theoretically, broken transpiling would produce a broken program.
- Theoretically, wrong transpiling settings would mean your script build would not work on the intended browsers — practically meaning you broke your promises (if you were stating which browsers your builds support in the first place, of course).
- Same with bundling — theoretically, either the program or its dependencies could be broken during bundling, especially when you think about legacy packages and interop between pure-ESM and `main`/`exports`, in light of the bouquet of newfangled bundlers and babel plugins and their configuration.
I’m not testing my IIFE builds (yet) because it’s impossible to import them in unit tests; that’s a whole next level of complexity. Which brings us to the next one…
Also, before pure-ESM days, when we had `main` and `module` exports in package.json, we also used to make CJS (CommonJS) builds, which used `require` and `module.exports` and were transpiled to ES5, but their dependencies were not bundled. The idea was that progressive bundlers would pick up the build from `dist/` that the `module` field points to, gaining tree-shaking etc. because of the `import` syntax. Decrepit bundlers would, in turn, consume `main`, which serves CJS transpiled to ES5. Since the pure-ESM transition, we stopped building CJS. For the record, CJS builds can be imported into unit tests fine.
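For illustration, that old dual setup looked roughly like this in package.json (the dist file names are placeholders):

```json
{
  "main": "dist/index.cjs.js",
  "module": "dist/index.esm.js"
}
```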
Synergy of e2e browser script testbed HTMLs
Let’s say we have an npm package, which is an exported function. As diligent citizens, we build and ship a script version of it for web browsers.
We start writing tiny e2e tests in `cypress`, tapping that script build. For example, we put `<input>`s in a tiny HTML page, and while we programmatically type into the first `<input>`, we assert what comes out in the second one.
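A minimal sketch of such a spec, assuming a testbed page at a hypothetical path, with made-up typed and expected values:

```js
// cypress/e2e/testbed.cy.js: a minimal sketch, the values are made up
describe("browser script build", () => {
  it("processes whatever is typed into the first input", () => {
    // the tiny HTML page which loads the browser script build
    cy.visit("/testbed.html");
    cy.get("input").first().type("some   input");
    // assert whatever the program is supposed to write into the second input
    cy.get("input").eq(1).should("have.value", "some input");
  });
});
```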
Notice, that’s a mini HTML web page with your npm package, all operational right there! What if we reused those mini HTML web pages? They could serve:
- as README examples of how to wire up script builds
- as GUIs for casual users and for your own testing and debugging
- as programmatically created, evergreen yeoman scaffolds
- as a memory-leak detection testbed (run it 24/7, see what happens)
- as full-blown GUI front-ends of the packages, proper web pages
Another idea: adhering to DRY, can we reuse unit tests (written for `uvu`, let’s say) in `cypress`? Because, I don’t know about you, but I’m not going to rewrite the same thing all over again in cypress format. For the record, there are around one million assertions performed across all our npm package unit tests in our monorepo (that’s counting programmatically generated asserts)!
Plus, think about the coverage. You do need 100% coverage for those browser script build e2e tests. A single test would only prove that “the program is not broken”, not that it’s “100% correct” across the whole coverage.
For the record, mangled transpiling might not necessarily break the program completely; for example, a bad setting in a Babel config might omit a spread operator here and there, changing the algorithm completely, and only on the UMD/IIFE builds.
Local testing setups on personal computers
It was a revelation for me personally: what is otherwise overkill for a CI (slowing down daily releases and increasing the CI bill) is still feasible on your computer, and it can be done ad hoc, for free and in the background.
For example, you can test your npm package on all Node versions. The most primitive way would be to take a list of Node versions from node.green (`12.4.0`, `12.8.1` and onwards), then chain the Node version manager calls, using `n` (or `nvm`) for example:
```sh
n 12.4.0 && yarn build && yarn test && n 12.8.1 && yarn build && yarn test && ...
```
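Or, as a slightly less primitive sketch, a shell loop; the version list here is just an example:

```sh
# assumes the "n" Node version manager is installed globally
for v in 12.4.0 12.8.1 14.21.3 16.20.0; do
  n "$v" && yarn build && yarn test || break
done
```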
OK. That’s unit testing. Can we do heavier scripting, to run unlikely cases? For example, can we create tests which prove that we haven’t been “pwned” yet?
Do you remember the codecov breach? It could have been tackled in minutes, not in 15 days, if they had an automated e2e script which emulated the user: ran the install bash script, downloaded the program and, most importantly, compared the received files’ hashes against the server’s.
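A crude sketch of such a check, with hypothetical URLs, assuming the vendor publishes a checksum file alongside the install script:

```sh
#!/usr/bin/env bash
set -euo pipefail

# hypothetical URLs; the real ones would point at the vendor's servers
curl -fsSL https://example.com/install.sh -o install.sh
curl -fsSL https://example.com/install.sh.sha256 -o install.sh.sha256

# sha256sum -c expects "<hash>  <filename>" lines in the checksum file
sha256sum -c install.sh.sha256 || {
  echo "Hash mismatch - the install script may have been tampered with"
  exit 1
}
```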
But that’s us talking from the future. You don’t know what can happen or where, so The Crazy Test Suite™ will likely be broad, slow, and cost time and money. That’s why I’m suggesting running it on your own computers as opposed to on a CI.
Extra req’s for monorepo
Since npm will resolve from parent `node_modules` folders, in a monorepo any missing dependencies in a package’s `package.json` may not cause the CI to fail if they are present in the root `package.json`.
That’s a liability. It has happened to us before.
Theoretically, in CI, each package should be tested in “a sandbox” — for example, spawn one CircleCI or SemaphoreCI “job” per package and manually assemble its `node_modules`, querying external packages via `pacote` and copy-pasting sibling internal monorepo packages.
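In very rough strokes, something like the sketch below; the paths and the package name are hypothetical, and it only covers direct external dependencies:

```js
// sandbox-one-package.mjs: a very rough sketch, not something we actually run
import pacote from "pacote";
import fs from "fs";
import path from "path";
import { execSync } from "child_process";

const pkgDir = "packages/some-package"; // hypothetical monorepo package
const sandbox = "sandbox/some-package";
fs.cpSync(pkgDir, sandbox, { recursive: true }); // copy the package itself

const pkg = JSON.parse(fs.readFileSync(path.join(pkgDir, "package.json"), "utf8"));
for (const [name, range] of Object.entries(pkg.dependencies || {})) {
  // fetch each declared dependency straight from the registry, so that nothing
  // gets silently resolved from the monorepo root node_modules
  await pacote.extract(`${name}@${range}`, path.join(sandbox, "node_modules", name));
}

execSync("yarn test", { cwd: sandbox, stdio: "inherit" });
```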
Nobody does this, of course. And the CI bill would surely increase from such a balancing act.
Takeaway
Take it even more seriously.
Script and automate the hell out of it.
Also, belt and braces.