feat(ci): add integration tests to GitHub Actions, remove .travis.yaml
This copies the integration tests from `.travis.yaml` into a script,
documents the assumptions it makes, and wires it into GitHub Actions.
Unlike the Travis version, we don't use Nixery's GCS backend, as
handing out access to the bucket, especially for PRs, needs to be
done carefully.

GCS can be added back to the integration test at a later point, either
by using a mock server, or by exposing the credentials only for master
builds (and having the test script branch on whether
GOOGLE_APPLICATION_CREDENTIALS is set).

The previous Travis version had some complicated post-mortem log
gathering. Instead of doing this, we can just `docker run` Nixery and
fork it into the background with the shell, so it keeps logging its
output while the tests run.

An additional `--rm` is passed, so the container gets cleaned up on
termination. This allows repeated runs on non-CI infrastructure (like
developer laptops) without having to manually clean up containers.
Fixes #119.
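
As a sketch of that credential-gating idea (hypothetical, not implemented
here; the bucket name and the gcs backend wiring are assumptions based on
Nixery's documented configuration), the script could pick its storage
arguments up front:

    # Use GCS when credentials are available (e.g. on master builds),
    # otherwise fall back to the filesystem backend used below.
    if [[ -n "${GOOGLE_APPLICATION_CREDENTIALS:-}" ]]; then
      storage_args=(-e NIXERY_STORAGE_BACKEND=gcs -e GCS_BUCKET=nixery-ci-test)
    else
      storage_args=(-e NIXERY_STORAGE_BACKEND=filesystem -e STORAGE_PATH=/var/cache/nixery)
    fi

The array would then be spliced into the `docker run` invocation as
`"${storage_args[@]}"`.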
#!/usr/bin/env bash
set -euo pipefail

# This integration test makes sure that the container image built
# for Nixery itself runs fine in Docker, and that images pulled
# from it work in Docker.

# Build the Nixery image with Nix and load it into Docker. `docker load -q`
# prints a line of the form "Loaded image: <name>", so the third field is
# the image reference.
IMG=$(docker load -q -i "$(nix-build -A nixery-image)" | awk '{ print $3 }')
echo "Loaded Nixery image as ${IMG}"

# Run the built Nixery docker image in the background, but keep printing its
# output as it occurs.
#
# We can't just mount a tmpfs to /var/cache/nixery, as tmpfs doesn't support
# user xattrs. Instead, create a temporary directory in the current working
# directory and hope it's backed by something that supports user xattrs; we
# will notice if it isn't, because Nixery will start complaining about being
# unable to set xattrs.
if [ -d var-cache-nixery ]; then rm -Rf var-cache-nixery; fi
mkdir var-cache-nixery
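
# Optional preflight sketch (hypothetical, not part of the original test):
# with setfattr from the attr package installed, user-xattr support could be
# verified up front instead of waiting for Nixery to complain:
#
#   setfattr -n user.nixery-smoke-test -v 1 var-cache-nixery \
#     && setfattr -x user.nixery-smoke-test var-cache-nixery \
#     || echo "warning: var-cache-nixery may not support user xattrs" >&2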

docker run --privileged --rm -p 8080:8080 --name nixery \
  -e PORT=8080 \
  --mount "type=bind,source=${PWD}/var-cache-nixery,target=/var/cache/nixery" \
  -e NIXERY_CHANNEL=nixos-unstable \
  -e NIXERY_STORAGE_BACKEND=filesystem \
  -e STORAGE_PATH=/var/cache/nixery \
  "${IMG}" &

# Give the container ~30 seconds to come up (30 one-second attempts). The
# /v2/ path is the Docker Registry HTTP API v2 base endpoint, which answers
# once the registry is serving.
set +e
attempts=0
echo -n "Waiting for Nixery to start ..."
until curl --fail --silent "http://localhost:8080/v2/"; do
  [[ attempts -eq 30 ]] && echo "Nixery container failed to start!" && exit 1
  ((attempts++))
  echo -n "."
  sleep 1
done
set -e
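
# A hypothetical alternative to the manual polling loop, assuming a curl
# recent enough (>= 7.52) to support --retry-connrefused:
#
#   curl --fail --silent --retry 30 --retry-delay 1 --retry-connrefused \
#     "http://localhost:8080/v2/"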

# Pull and run an image of the current CPU architecture. Nixery builds
# images ad hoc: the image path names the packages to include (here
# nixpkgs' hello), and the arm64/ prefix selects the architecture.
case $(uname -m) in
  x86_64)
    docker run --rm localhost:8080/hello hello
    ;;
  aarch64)
    docker run --rm localhost:8080/arm64/hello hello
    ;;
esac

# Pull an image of the opposite CPU architecture (but without running it)
case $(uname -m) in
  x86_64)
    docker pull localhost:8080/arm64/hello
    ;;
  aarch64)
    docker pull localhost:8080/hello
    ;;
esac
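
# Note: on any other architecture, both case statements above fall through
# silently. A hypothetical guard (not in the original script) would be an
# extra branch in each:
#
#   *) echo "unsupported architecture: $(uname -m)" >&2; exit 1 ;;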