Creating a Jest Container Using Bazel
Anthony Oleinik
TL;DR: Here’s a gist for a BUILD.bazel that runs jest tests in a container or as a bazel test target.
Often, I run into a task that I scope, prior to starting, at about N hours of work, only for it to take roughly N days: a unit-order of magnitude more than I intended. Usually such a task hides some edge case or constraint that I did not take into account: perhaps there are additional components I didn’t scope in, or some constraint makes the work much more difficult.
The task at hand? Build a container that runs jest tests. If I were using a Dockerfile, this is essentially what we are trying to reconstruct in Bazel:
FROM node:18-alpine
# Set the working directory inside the container
WORKDIR /app
# Copy the tests
COPY . .
# This should be a layer; that is, we shouldn't run this every time we
# run the tests
RUN npm ci --production
CMD ["npm", "test"]
While this was trivial with a Dockerfile, reproducing the image over and over again may be difficult. Further, the docker images must be stored out of band: built with some sort of CLI command and pushed elsewhere. If we use Bazel, we can use our build system to re-build the image the exact same way, every time, well integrated into a single bazel run <some registry push target> (we’ll sketch such a push target at the end).
The Why
Let’s first answer the “why”, because it does seem like a strange thing to do: Google’s excellent Cloud Deploy allows you to specify a container that is run to “verify” your deployment. We could run some simple script that fires off a series of curl commands et al., but the very nice UI/UX that something like Jest affords out of the box is not to be undervalued: determining exactly which tests failed and why, as well as having a robust assertion library, are all useful tools that reduce the friction of both writing the tests and debugging them.
People do what is easy, and if it’s not easy to write tests, well…
Where Does The Complexity Come From?
As a person who spends far too long with nix, the task estimation seemed easy: create a derivation that produces a docker image, having run some 2nix tool to install the npm modules. The docker image then runs npm test.
No.
First, bazel is “less complex” than nix in the sense that the end-user APIs of the various -rules packages are well optimized for their intended use cases. Once you begin to stray off the beaten path, you end up contorting the API very hard, trying to fit their square peg into your round hole.
The constraints of this are as follows:
- I want to be able to run the tests in three separate ways: locally using bazel test, so I can iterate on the tests without having to re-build a container every time I make a change; locally in a container, so I can test the build in a container; and, as table stakes, inside a container in Cloud Deploy.
- It would be useful to be able to provide jest flags when running locally, so I don’t have to run the entire test suite after changing a single test.
- The tests should be parameterized to run in different environments. This turned out to be more of a jest problem than a Bazel one, but it was a source of complexity nonetheless.
One source of complexity here is that rules_js has taken the stance that it will only support pnpm as a package manager. That is all well and good, but pnpm symlinks transitive dependencies, which makes things like executing node ./npm_packages/.bin/jest a major pain, since node will not resolve the transitive dependencies that way.
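To make the resolution problem concrete, here is roughly the layout pnpm produces (an illustrative sketch only; paths abbreviated and versions elided). Only direct dependencies get top-level symlinks, while the real packages and their transitive dependencies live under .pnpm:
node_modules/
├── .bin/jest        -> points into .pnpm/jest@<version>/.../bin/jest.js
├── jest             -> .pnpm/jest@<version>/node_modules/jest
└── .pnpm/
    └── jest@<version>/node_modules/
        ├── jest/        (the real package)
        └── jest-cli/    (its deps, symlinked next to it, not at the top level)
A node process resolving modules from the top-level node_modules never sees those nested transitive packages, hence the pain.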
What Did the Solution Look Like?
I’ll spare you the battle that I had with the Bazel rules; suffice it to say it was quite the epic: at one point I had written my own Bazel rule that ran npm install, completely circumventing rules_js. After a moment of sobering thought, I realized the monster I had created was far too complex to let live, and I continued searching for a better solution.
Let’s start off with the easiest bit: a quick dump of the MODULE.bazel:
bazel_dep(name = "aspect_rules_ts", version = "3.0.0-rc1")
bazel_dep(name = "aspect_rules_js", version = "2.0.0-rc5")
bazel_dep(name = "rules_nodejs", version = "6.2.0")
node = use_extension("@rules_nodejs//nodejs:extensions.bzl", "node", dev_dependency = True)
node.toolchain(node_version = "20.10.0")
use_repo(node, "nodejs_toolchains")
rules_ts_ext = use_extension("@aspect_rules_ts//ts:extensions.bzl", "ext", dev_dependency = True)
rules_ts_ext.deps(ts_version_from = "//integration_tests:package.json")
use_repo(rules_ts_ext, "npm_typescript")
npm = use_extension("@aspect_rules_js//npm:extensions.bzl", "npm", dev_dependency = True)
npm.npm_translate_lock(
    name = "integration_npm",
    pnpm_lock = "//integration_tests:pnpm-lock.yaml",
    npmrc = "//integration_tests:.npmrc",
)
use_repo(npm, "integration_npm")
bazel_dep(name = "aspect_rules_jest", version = "0.22.0-rc0")
Some quick N.B.’s here:
- You are able to generate a pnpm-lock.yaml from a package-lock.json (see the note after this list), but I chose not to, because having two package managers is confusing; regardless, all dependencies need to be added to the pnpm lockfile anyway, so I chose to use pnpm over npm entirely.
- You’ll note that the generated npm packages repo is called integration_npm; I wanted to leave the door open to creating new node-based subprojects in the future, so I decided to “scope” the name of the packages so that not many changes have to be made if more gets added here.
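As an aside, the lockfile conversion mentioned above is a one-liner; pnpm can read an existing npm lockfile in the current directory and emit its own:
pnpm import
This writes a pnpm-lock.yaml based on the package-lock.json (or npm-shrinkwrap.json) it finds.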
You’ll also note that there is TypeScript hidden amongst the weeds! That’s where we’ll start our BUILD.bazel.
# Just gonna dump all the dependencies here so that I don't have to keep coming back up here
load("@aspect_rules_ts//ts:defs.bzl", "ts_project")
load("@integration_npm//:defs.bzl", "npm_link_all_packages")
load("@aspect_rules_js//js:defs.bzl", "js_binary", "js_image_layer")
load("@rules_oci//oci:defs.bzl", "oci_image")
load("@aspect_rules_jest//jest:defs.bzl", "jest_test")
load("@aspect_bazel_lib//lib:copy_file.bzl", "copy_file")
npm_link_all_packages(name = "node_modules")
ts_project(
    name = "integration_test_js",
    srcs = glob(["src/**/*.ts"]),
    deps = [":node_modules"],
)
This rule takes all of the typescript sources in the src folder and converts them to JavaScript sources. That is, if I had a file called src/some.test.ts, referring to :integration_test_js/src/some.test.js would work.
Next, let’s sidestep running this outside the container and instead focus on running inside the container.
The js_binary rule from rules_js essentially runs node <your path> on a script. This is useful if you’re writing some app in JS, since you typically have some main.js where you start your server, but this isn’t the case for us; instead, we want to run node_modules/.bin/jest. We could point js_binary at that path, but I know that this leads to a lot of pain, since dependency resolution is all gobbledygook from inside the node_modules folder.
Instead, let’s do what js_binary intended and run a script that can live outside node_modules:
// Invoke jest programmatically instead of exec-ing node_modules/.bin/jest.
const jest = require("jest");
const path = require("path");
// Run the CLI with default options, rooted at this script's directory.
jest.runCLI({}, [path.resolve(__dirname, '.')]);
A very simple script that I will dub driver.js: it runs jest from its non-public library API. There seems to be some push to expose a real public API, but also a lack of steam behind the proposal, so we do what we must. Here is that binary rule all in place:
# bazel run on this will NOT work; bazel does not
# understand how to decipher the symlinks here.
js_binary(
    name = "test_binary",
    chdir = package_name(),
    data = [
        # This is the js that was generated from our TS.
        ":integration_test_js",
        "//integration_tests:node_modules",
        "driver.js",
    ],
    # Make this private so no one thinks this is a runnable target;
    # it will not work outside of the container.
    visibility = ["//visibility:private"],
    entry_point = "driver.js",
)
Now that we have a sensible binary, let’s follow the example API for the rest of the owl:
js_image_layer(
    name = "test_layer",
    binary = ":test_binary",
    root = "/tests",
)
# This is `/[js_image_layer 'root']/[package name of js_image_layer 'binary' target]/[name of js_image_layer 'binary' target]`
CMD = "/tests/integration_tests/test_binary"
oci_image(
    name = "integration_test_image",
    # do not use distroless! we need an image with bash.
    base = "@ubuntu_linux_amd64",
    cmd = [CMD],
    entrypoint = ["bash"],
    tars = [
        ":test_layer",
    ],
    visibility = ["//visibility:public"],
    workdir = select({
        "@aspect_bazel_lib//lib:bzlmod": CMD + ".runfiles/_main",
        "//conditions:default": CMD + ".runfiles/__main__",
    }),
)
This image won’t quite work yet, though: we haven’t told jest where its config file can be found. We want that config file to be slightly magical; it’s going to hold our per-environment configuration. Jest has the ability to set up global constants; we want to use that to our advantage so that we can easily refer to our configuration. Our tests, in the end, will look like this:
// So clean!
test('tests should take input parameters', () => {
    expect(INTEGRATION_TEST_CONFIG.passSmokeTest).toBe(true);
});
Let’s first get TypeScript to accept the fact that we are adding global constants. Add a decs.d.ts to the root of the project:
// The empty export marks this file as a module, which TypeScript
// requires before it will accept `declare global`.
export {}
type IntegrationTestConfig = {
    // if we should pass the initial smoke test. useful
    // to double check that we are getting configuration
    // parameters correctly.
    passSmokeTest: boolean,
}
declare global {
    var INTEGRATION_TEST_CONFIG: IntegrationTestConfig
}
The tsconfig.json will pick that up automatically, but Bazel needs to be told that any TS referring to it is valid:
# amend the previous ts_project
ts_project(
    name = "integration_test_js",
    srcs = glob(["src/**/*.ts"]) + [
        "decs.d.ts",
    ],
    deps = [":node_modules"],
)
Let’s teach Bazel how to choose which jest configuration file to use:
# These variables can typically live somewhere else, like in a `bazel/BUILD.bazel` file,
# since other parts of your build may care if you're building for local or remote.
# N.B. string_flag comes from bazel-skylib, so one more load belongs with the
# others at the top of the file:
#   load("@bazel_skylib//rules:common_settings.bzl", "string_flag")
string_flag(
    name = "environment",
    build_setting_default = "local",
)
config_setting(
    name = "local",
    flag_values = {
        ":environment": "local",
    },
)
config_setting(
    name = "staging",
    flag_values = {
        ":environment": "staging",
    },
)
# The jest configuration file for the docker image to pick up.
copy_file(
    name = "runtime_config",
    src = select({
        ":local": "configs/jest.local.config.js",
        ":staging": "configs/jest.staging.config.js",
        # default to the local config.
        "//conditions:default": "configs/jest.local.config.js",
    }),
    out = "jest.config.js",
)
copy_file is a small rule (loaded above from aspect_bazel_lib, which borrows it from skylib) that just copies a file; in this case, we’re choosing which file to copy based on the flag a user passed in, e.g. bazel build //integration_tests:runtime_config --//integration_tests:environment=local. I’ll populate one of those files, but you can have as many as you have environments. In the configs folder, under jest.staging.config.js:
/** @type {IntegrationTestConfig} */
const config = {
    passSmokeTest: true,
}
/** @type {import('jest').Config} */
const jestConfig = {
    verbose: true,
    globals: {
        INTEGRATION_TEST_CONFIG: config
    }
};
module.exports = jestConfig;
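One wiring detail worth making explicit: for the image to actually see the selected config, the copy_file output has to land next to driver.js in the binary’s runfiles, where jest will discover a jest.config.js on its own. A minimal sketch, under the assumption that adding the :runtime_config target to the data of the earlier js_binary is all that’s needed:
js_binary(
    name = "test_binary",
    chdir = package_name(),
    data = [
        ":integration_test_js",
        "//integration_tests:node_modules",
        "driver.js",
        ":runtime_config",  # the select()-ed jest.config.js from above
    ],
    visibility = ["//visibility:private"],
    entry_point = "driver.js",
)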
Almost there! The last part is to have a bazel test target. This is what we’re going to use to run the tests locally; we’re going to hard-code the local config here, because the only way this target should ever be executed is locally.
jest_test(
    name = "tests",
    # no need to copy, either: we just pass the real path to the config.
    config = "configs/jest.local.config.js",
    data = [
        ":integration_test_js",
    ],
    node_modules = ":node_modules",
)
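A quick note on constraint #2 (passing jest flags): arguments after -- only reach the underlying binary with bazel run; with bazel test they are parsed as target patterns, so you need --test_arg instead. A sketch, assuming jest_test forwards its program arguments straight to jest (the flag shown is an ordinary jest CLI flag):
bazel run //integration_tests:tests -- --testPathPattern=smoke
bazel test //integration_tests:tests --test_arg=--testPathPattern=smoke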
And that’s that! We’ve now assembled a container that runs a jest test suite for us.
We can do the following:
- bazel run //integration_tests:tests -- some-jest-flags (or bazel test with --test_arg, per the note above), which will run the jest tests.
- bazel build //integration_tests:integration_test_image, which will build us an image with the local configs in place.
- bazel build //integration_tests:integration_test_image --//integration_tests:environment=staging, which will build a container that has the staging environment configuration in place.
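Finally, to close the loop on the single bazel run <some registry push target> from the top of the post: rules_oci ships an oci_push rule for exactly this (and an oci_tarball/oci_load rule, depending on your rules_oci version, for loading the image into a local Docker daemon). A sketch, where the repository URL is a placeholder you’d swap for your own registry:
load("@rules_oci//oci:defs.bzl", "oci_push")
# bazel run //integration_tests:push_image pushes the image we built above.
oci_push(
    name = "push_image",
    image = ":integration_test_image",
    # hypothetical registry path; substitute your own.
    repository = "us-docker.pkg.dev/my-project/containers/integration-tests",
    remote_tags = ["latest"],
)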