Modifies the layer serving endpoint to be a generic blob-serving
endpoint that can handle both manifest and layer object "types".
Note that this commit does not yet populate the CAS with any
manifests.
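Conceptually the endpoint resolves any content-addressed object by its
digest, regardless of type. A minimal sketch of such a handler, with
hypothetical names (`lookupBlob` stands in for the CAS lookup and the
route pattern is illustrative):
```
package main

import (
	"net/http"
	"regexp"
)

// Manifests and layers are both content-addressed blobs, so a single
// handler can serve either object "type".
var blobRegex = regexp.MustCompile(`^/v2/([\w./-]+)/(manifests|blobs)/sha256:([a-f0-9]+)$`)

// lookupBlob is a hypothetical stand-in for the CAS lookup by digest.
func lookupBlob(digest string) ([]byte, error) {
	return nil, http.ErrMissingFile
}

func serveBlob(w http.ResponseWriter, r *http.Request) {
	match := blobRegex.FindStringSubmatch(r.URL.Path)
	if match == nil {
		http.NotFound(w, r)
		return
	}

	blob, err := lookupBlob(match[3])
	if err != nil {
		http.NotFound(w, r)
		return
	}

	w.Write(blob)
}

func main() {
	http.HandleFunc("/", serveBlob)
	http.ListenAndServe(":8080", nil)
}
```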
Permission changes in the Travis CI Nix builders have caused this to
start failing, as the build user now has insufficient permissions to
use caches.
There may be a way to change the permissions instead, but in the
meantime we will just cause things to rebuild.
Channel serving has moved to a new subdomain, and the redirect
semantics have changed. Instead of serving temporary redirects,
permanent redirects are now issued.
I've reported this upstream as a bug, but this workaround will fix it
in the meantime.
This moves the pin from just being in the Travis configuration to also
being set in a nixpkgs-pin.nix file, which makes it trivial to build
at the right commit when performing local builds.
Previously, this was failing as follows:
```
these derivations will be built:
/nix/store/7rbrf06phkiyz31dwpq88x920zjhnw0c-nixery-popcount.drv
building '/nix/store/7rbrf06phkiyz31dwpq88x920zjhnw0c-nixery-popcount.drv'...
building
warning: GOPATH set to GOROOT (/nix/store/4859cp1v7zqcqh43jkqsayl4wrz3g6hp-go-1.13.4/share/go) has no effect
failed to initialize build cache at /homeless-shelter/.cache/go-build: mkdir /homeless-shelter: permission denied
builder for '/nix/store/7rbrf06phkiyz31dwpq88x920zjhnw0c-nixery-popcount.drv' failed with exit code 1
error: build of '/nix/store/7rbrf06phkiyz31dwpq88x920zjhnw0c-nixery-popcount.drv' failed
```
This gets rid of the package called "server" and instead moves
everything into the project root, such that Go actually builds us a
binary called `nixery`.
This is the first step towards factoring out CLI-based functionality
for Nixery.
The previous logic failed because single meta-packages such as
"nixery.dev/shell" would not end up removing the meta-package itself
from the list of packages passed to Nix, causing a build failure.
This was a regression introduced in 827468a.
Nixery itself is built with the buildLayeredImage system, which takes
some time to create large numbers of layers.
This adjusts the default number of image layers from 96 to 20.
Additionally, Nixery's image is often loaded with `docker load -i`,
which ignores layer cache hits anyway.
The CI build is furthermore configured to use only one layer, which
speeds up CI runs.
Imports the package set twice in the builder expression: once
configured for the target system, once configured for the native
system.
This makes it possible to fetch the actual image contents for the
required architecture, but use local tools to assemble the symlink
layer and metadata.
Specifying this meta-package toggles support for ARM64 images, for
example:
```
# Pull a default x86_64 image
docker pull nixery.dev/hello

# Pull an ARM64 image
docker pull nixery.dev/arm64/hello
```
Adds an implementation of popcount that, instead of realising
derivations locally, just queries the cache's narinfo files.
The downside of this is that calculating popularity for arbitrary Nix
package sets is not possible with this implementation. The upside is
that calculating the popularity for an entire Nix channel can now be
done in ~10 seconds[0].
This fixes #65.
[0]: Assuming a /fast/ internet connection.
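For illustration, a sketch of the narinfo-based lookup (assuming the
plain-text `Key: value` narinfo format served by cache.nixos.org; the
names are hypothetical and the real implementation differs in detail).
Counting how often each path appears in the references of a channel's
packages yields the popularity rating:
```
package popcount

import (
	"bufio"
	"net/http"
	"strings"
)

// references fetches the narinfo file for a store path hash from the
// binary cache and returns the runtime references listed in it, without
// ever realising the derivation locally.
func references(cacheURL, storePathHash string) ([]string, error) {
	resp, err := http.Get(cacheURL + "/" + storePathHash + ".narinfo")
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		line := scanner.Text()
		if strings.HasPrefix(line, "References:") {
			return strings.Fields(strings.TrimPrefix(line, "References:")), nil
		}
	}

	return nil, scanner.Err()
}
```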
This case should not be possible unless something manually constructs
a logrus entry with a non-error value in the log.ErrorKey field, but
it's better to be safe than sorry.
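A defensive extraction might look like this sketch (`errorMessage` is a
hypothetical helper):
```
package logging

import (
	log "github.com/sirupsen/logrus"
)

// errorMessage extracts a message from the entry's `error` field without
// assuming that the stored value actually implements the error interface.
func errorMessage(e *log.Entry) (string, bool) {
	v, ok := e.Data[log.ErrorKey]
	if !ok {
		return "", false
	}

	err, ok := v.(error)
	if !ok {
		// Safety net: a non-error value was logged under log.ErrorKey.
		return "", false
	}

	return err.Error(), true
}
```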
Previously, background contexts were created where necessary (e.g. in
GCS interactions). Should I begin to use request timeouts or other
context-dependent things in the future, it's useful to have the actual
HTTP request context around.
This threads the request context through the application to all places
that need it.
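The pattern is the standard Go one: handlers hand `r.Context()` down
the call chain instead of creating fresh background contexts at the
call sites. A sketch with hypothetical names:
```
package web

import (
	"context"
	"io"
	"net/http"
)

// fetchBlob is a hypothetical downstream call; because it receives the
// request context, cancelling the request also cancels the fetch.
func fetchBlob(ctx context.Context, url string) ([]byte, error) {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	return io.ReadAll(resp.Body)
}

func handler(w http.ResponseWriter, r *http.Request) {
	// The request's own context is threaded through, replacing the
	// context.Background() calls previously created where necessary.
	blob, err := fetchBlob(r.Context(), "https://example.com/blob")
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}

	w.Write(blob)
}
```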
The point at which files are moved happens to also (initially) be the
point where the `layers` directory is created. For this reason
renaming must ensure that all path components exist, which this commit
takes care of.
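A sketch of the fix (the helper name is hypothetical):
```
package storage

import (
	"os"
	"path/filepath"
)

// moveInto renames src to dest, first ensuring that all path components
// of the destination (e.g. the `layers` directory) exist.
func moveInto(src, dest string) error {
	if err := os.MkdirAll(filepath.Dir(dest), 0755); err != nil {
		return err
	}

	return os.Rename(src, dest)
}
```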
The filesystem storage backend can be enabled by setting
`NIXERY_STORAGE_BACKEND` to `filesystem` and `STORAGE_PATH` to a disk
location from which Nixery can serve files.
This abstracts over the functionality of Google Cloud Storage and
other potential underlying storage backends to make it possible to
replace these in Nixery.
The GCS backend is not yet reimplemented.
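As a sketch, the abstraction boils down to an interface along these
lines (the method set here is illustrative, not the final API):
```
package storage

import (
	"context"
	"io"
	"net/http"
)

// Backend abstracts over blob storage so that GCS can be swapped out for
// other implementations (such as the local filesystem).
type Backend interface {
	// Name returns a human-readable backend name for logging.
	Name() string

	// Persist stores the data written by the callback under the given key.
	Persist(ctx context.Context, key string, f func(io.Writer) error) error

	// Serve writes the blob stored under the given key to the response.
	Serve(key string, w http.ResponseWriter, r *http.Request) error
}
```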
The JSON file generated for service account keys already contains the
required information for signing URLs in GCS, thus the environment
variables for toggling signing behaviour have been removed.
Signing is now enabled automatically in the presence of service
account credentials (i.e. `GOOGLE_APPLICATION_CREDENTIALS`).
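A sketch of extracting the signing material directly from the key file
with the GCS client library (the struct and function names are
illustrative):
```
package gcs

import (
	"encoding/json"
	"os"
	"time"

	"cloud.google.com/go/storage"
)

// serviceAccount holds the two fields of the key file that URL signing
// actually needs.
type serviceAccount struct {
	ClientEmail string `json:"client_email"`
	PrivateKey  string `json:"private_key"`
}

func signedURL(bucket, object string) (string, error) {
	raw, err := os.ReadFile(os.Getenv("GOOGLE_APPLICATION_CREDENTIALS"))
	if err != nil {
		return "", err
	}

	var sa serviceAccount
	if err := json.Unmarshal(raw, &sa); err != nil {
		return "", err
	}

	return storage.SignedURL(bucket, object, &storage.SignedURLOptions{
		GoogleAccessID: sa.ClientEmail,
		PrivateKey:     []byte(sa.PrivateKey),
		Method:         "GET",
		Expires:        time.Now().Add(5 * time.Minute),
	})
}
```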
Some Nix download mechanisms add a second hash to the store path,
which ended up included in the source hash output (breaking argument
interpolation).
Instead of compressing & decompressing again to get the underlying tar
hash, use a similar mechanism as for store path layers for the symlink
layer and only compress it once while uploading.
Docker expects hashes of compressed tarballs in the manifest (as these
are used to fetch from the content-addressable layer store), but for
some reason it expects hashes in the configuration layer to be of
uncompressed tarballs.
To achieve this, an additional SHA256 hash is calculated while
creating the layer tarballs, but before passing them to the gzip
writer.
In the current setup the symlink layer is first compressed and
then decompressed again to calculate its hash. This can be refactored
in a future change.
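The single-pass hashing works by feeding the tar stream through an
`io.MultiWriter` into both the hash and the gzip writer. A sketch with
hypothetical names:
```
package builder

import (
	"archive/tar"
	"compress/gzip"
	"crypto/sha256"
	"fmt"
	"io"
)

// writeLayer writes a gzipped tarball to w while also hashing the
// *uncompressed* tar stream, producing both digests in one pass.
func writeLayer(w io.Writer, contents func(*tar.Writer) error) (string, error) {
	gz := gzip.NewWriter(w)
	uncompressed := sha256.New()

	// Everything written to the tar writer reaches both the hash and
	// the gzip writer.
	tw := tar.NewWriter(io.MultiWriter(uncompressed, gz))

	if err := contents(tw); err != nil {
		return "", err
	}
	if err := tw.Close(); err != nil {
		return "", err
	}
	if err := gz.Close(); err != nil {
		return "", err
	}

	return fmt.Sprintf("sha256:%x", uncompressed.Sum(nil)), nil
}
```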
This has become an issue recently with changes such as GZIP
compression, where CI runs no longer work because they conflict with
the production bucket for the public instance.
Makes use of the `.WithError` and `.WithField` convenience functions
in logrus to simplify log statement construction.
This has the added benefit of making it easier to correctly log
errors.
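For example (with illustrative field values):
```
package main

import (
	"errors"

	log "github.com/sirupsen/logrus"
)

func main() {
	err := errors.New("bucket unreachable")

	// The error is attached as a structured `error` field instead of
	// being interpolated into the message string.
	log.WithError(err).
		WithField("layer", "sha256:abc").
		Error("failed to upload layer")
}
```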
This rewrites all existing log statements into the structured logrus
format. For consistency, all errors are always logged separately from
the primary message in a field called `error`.
Only the "info", "error" and "warn" severities are used.
Uses a hash of Nixery's sources as the version displayed when Nixery
launches or logs an error. This makes it possible to distinguish
between errors logged from different versions.
The source hashes should be reproducible between different checkouts
of the same source tree.
This formatter has basic support for the Stackdriver Error Reporting
format, but several things are still lacking:
* the service version (preferably git commit?) needs to be included in
the server somehow
* log streams should be split between stdout/stderr as that is how
AppEngine (and several other GCP services?) seemingly differentiate
between info/error logs
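A minimal sketch of such a formatter, following the public
structured-logging documentation rather than Nixery's final
implementation:
```
package logging

import (
	"encoding/json"
	"strings"

	log "github.com/sirupsen/logrus"
)

type stackdriverFormatter struct{}

func (stackdriverFormatter) Format(e *log.Entry) ([]byte, error) {
	msg := map[string]interface{}{
		"severity": strings.ToUpper(e.Level.String()),
		"message":  e.Message,
		"time":     e.Time,
	}

	for k, v := range e.Data {
		msg[k] = v
	}

	// Entries carrying this type marker are picked up by Error Reporting.
	if e.Level <= log.ErrorLevel {
		msg["@type"] = "type.googleapis.com/google.devtools.clouderrorreporting.v1beta1.ReportedErrorEvent"
	}

	j, err := json.Marshal(msg)
	return append(j, '\n'), err
}
```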
These two packages almost always end up being required by programs,
but people don't necessarily consider them.
They will now always be added and their popularity is artificially
inflated to ensure they end up at the top of the layer list.
Image layers in manifests are now sorted in a stable (descending)
order based on their merge rating, meaning that layers more likely to
be shared between images come first.
The reason for this change is Docker's handling of image layers on
overlay2: images are condensed into a single representation on disk
after downloading.
Because of this, Docker will constantly re-download layers that are
applied in a different order in different images (layer order matters
in imperatively created images), based on something it calls the
'ChainID'.
Sorting the layers this way raises the likelihood of a long chain of
matching layers at the beginning of an image.
This relates to #39.
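The sort itself is straightforward; a sketch with hypothetical types:
```
package manifest

import "sort"

type layer struct {
	Digest      string
	MergeRating uint64
}

// sortLayers orders layers by merge rating, descending, so that layers
// most likely to be shared between images produce matching ChainIDs at
// the start of the chain.
func sortLayers(layers []layer) {
	sort.Slice(layers, func(i, j int) bool {
		return layers[i].MergeRating > layers[j].MergeRating
	})
}
```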
Implements a local manifest cache that uses the temporary directory to
cache manifest builds.
This is necessary due to the size of manifests: Keeping them entirely
in-memory would quickly balloon the memory usage of Nixery, unless
some mechanism for cache eviction is implemented.
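A sketch of such a cache in the temporary directory (names and layout
are illustrative):
```
package builder

import (
	"os"
	"path/filepath"
)

func manifestPath(key string) string {
	return filepath.Join(os.TempDir(), "nixery-manifests", key)
}

// cachedManifest returns a previously built manifest, if one exists.
func cachedManifest(key string) ([]byte, bool) {
	m, err := os.ReadFile(manifestPath(key))
	return m, err == nil
}

// cacheManifest persists a built manifest to disk instead of keeping it
// in memory.
func cacheManifest(key string, manifest []byte) error {
	if err := os.MkdirAll(filepath.Dir(manifestPath(key)), 0755); err != nil {
		return err
	}

	return os.WriteFile(manifestPath(key), manifest, 0644)
}
```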
The functions used for layer creation are now easier to follow and
have clear points at which the layer cache is checked and populated.
This relates to #50.
MD5 hash checking is no longer performed by Nixery (it does not seem
to be necessary), hence the layer cache now only keeps the SHA256 hash
and size in the form of the manifest entry.
This makes it possible to restructure the builder code to perform
cache-fetching and cache-populating for layers in the same place.
The new builder now writes manifests to and reads cached manifests
from GCS. The in-memory cache is disabled, as manifests are no longer
written to a local file and the caching of file paths does not work
(unless we reintroduce reading/writing from temp files as part of the
local cache).
This cache is no longer required: it is implicit in the layer cache
(mapping store path hashes to layer hashes), which already indicates
that a layer has been seen.
Implements the new build process to the point where it can actually
construct and serve image manifests.
It is worth noting that this build process works even if the Nix
sandbox is enabled!
It is also worth noting that none of the caching functionality that
the new build process enables (such as per-layer build caching) is
actually in use yet, hence running Nixery at this commit is prone to
doing more work than previously.
This relates to #50.
The new manifest package creates image manifests and their
configuration. This previously happened in Nix, but is now part of the
server's workload.
This relates to #50.
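For reference, the format being assembled is Docker's image manifest
(schema version 2); a sketch of the corresponding Go types:
```
package manifest

// Entry describes a single object (the configuration or a layer)
// referenced by its content digest.
type Entry struct {
	MediaType string `json:"mediaType"`
	Size      int64  `json:"size"`
	Digest    string `json:"digest"`
}

// Manifest is the top-level image manifest returned to Docker clients.
type Manifest struct {
	SchemaVersion int     `json:"schemaVersion"`
	MediaType     string  `json:"mediaType"`
	Config        Entry   `json:"config"`
	Layers        []Entry `json:"layers"`
}
```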
The new build process can now call out to Nix to create layers and
upload them to the bucket if necessary.
The layer cache is populated, but not yet used.
The state type contains things such as the bucket handle and Nixery's
configuration which need to be passed around in the builder.
This is only added for convenience.
This cache is going to be used for looking up whether a layer build
has taken place already (based on a hash of the layer contents).
See the caching section in the updated documentation for details.
Relates to #50.
This introduces a new Nix derivation that, given an attribute set of
layer hashes mapped to store paths, will create a layer tarball for
each of the store paths.
This is going to be used by the builder to create layers that are not
present in the cache.
Relates to #50.
Refactors the layer grouping package (which previously compiled to a
separate binary) to expose the layer grouping logic via a function
instead.
This is the next step towards creating layers inside of the server
component instead of in Nix.
Relates to #50.
Simplifies the wrapper script used to invoke Nix builds from Nixery to
just contain the essentials, since the layer grouping logic is moving
into the server itself.
This is the first step towards a more granular build process where
some of the build responsibility moves into the server component.
Rather than assembling all layers inside of Nix, it will only create
the symlink forest and return information about the runtime paths
required by the image.
The server is then responsible for grouping these paths into layers,
and assembling the layers themselves.
Relates to #50.
Fixes two launch script compatibility issues with other container
runtimes (such as gvisor):
* don't fail if /tmp already exists
* don't fail if the environment becomes unset
Instead of dumping all Nix output at once at the end of the build
process, stream it live as the lines come in.
This is a lot more useful for debugging stuff like where manifest
retrievals get stuck.
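A sketch of line-wise streaming (the real invocation takes more
arguments; Nix writes build output to stderr):
```
package build

import (
	"bufio"
	"os/exec"

	log "github.com/sirupsen/logrus"
)

// runNix logs build output line by line as it arrives, instead of
// collecting it and dumping it at the end.
func runNix(args ...string) error {
	cmd := exec.Command("nix-build", args...)

	stderr, err := cmd.StderrPipe()
	if err != nil {
		return err
	}
	if err := cmd.Start(); err != nil {
		return err
	}

	scanner := bufio.NewScanner(stderr)
	for scanner.Scan() {
		log.WithField("cmd", "nix-build").Info(scanner.Text())
	}

	return cmd.Wait()
}
```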
Caches manifests under `manifests/$cacheKey` in the GCS bucket and
introduces two-tiered retrieval of manifests from the caches (local
first, bucket second).
There is some cleanup to be done in this code, but the initial version
works.
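The two-tiered lookup order, as a sketch with hypothetical names (a
real implementation needs locking around the local tier):
```
package builder

import "context"

// bucket is a stand-in for the GCS-backed manifest store.
type bucket interface {
	Fetch(ctx context.Context, key string) ([]byte, error)
}

// local is the first cache tier.
var local = map[string][]byte{}

func manifestFromCache(ctx context.Context, b bucket, key string) ([]byte, bool) {
	// First tier: local cache.
	if m, ok := local[key]; ok {
		return m, true
	}

	// Second tier: the GCS bucket.
	m, err := b.Fetch(ctx, "manifests/"+key)
	if err != nil {
		return nil, false
	}

	// Backfill the local tier for subsequent requests.
	local[key] = m
	return m, true
}
```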
Use the PackageSource.CacheKey function introduced in the previous
commit to determine the key at which a manifest should be cached in
the local cache.
Due to this change, manifests for moving target sources are no longer
cached and the recency threshold logic has been removed.