Caches manifests under `manifests/$cacheKey` in the GCS bucket and
introduces two-tiered retrieval of manifests from the caches (local
first, bucket second).
There is some cleanup to be done in this code, but the initial version
works.
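The two-tiered retrieval can be pictured with a short Go sketch
(`cacheDir`, `bucket` and the function name are illustrative, not the
actual Nixery symbols):

    package builder

    import (
        "context"
        "io/ioutil"
        "path/filepath"

        "cloud.google.com/go/storage"
    )

    var (
        cacheDir = "/var/cache/nixery"  // assumed local cache location
        bucket   *storage.BucketHandle // initialised via storage.NewClient
    )

    func manifestFromCaches(ctx context.Context, key string) ([]byte, bool) {
        // First tier: the local cache on disk.
        local := filepath.Join(cacheDir, "manifests", key)
        if m, err := ioutil.ReadFile(local); err == nil {
            return m, true
        }

        // Second tier: `manifests/$cacheKey` in the GCS bucket.
        r, err := bucket.Object("manifests/" + key).NewReader(ctx)
        if err != nil {
            return nil, false // miss in both tiers
        }
        defer r.Close()

        m, err := ioutil.ReadAll(r)
        if err != nil {
            return nil, false
        }

        // Backfill the local cache so the next lookup stays local.
        _ = ioutil.WriteFile(local, m, 0644)
        return m, true
    }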
Use the PackageSource.CacheKey function introduced in the previous
commit to determine the key at which a manifest should be cached in
the local cache.
Due to this change, manifests for moving target sources are no longer
cached and the recency threshold logic has been removed.
Introduces three new types representing each of the possible package
sources and moves the logic for specifying the package source to the
server.
Concrete changes:
* Determining whether a specified git reference is a commit vs. a
branch/tag is now done in the server, more precisely by matching it
against a regular expression (see the sketch after this list).
* Package sources now have a new `CacheKey` function which can be used
to retrieve a key under which a build manifest can be cached *if*
the package source is not a moving target (i.e. a full git commit
hash of either nixpkgs or a private repository).
This function is not yet used.
* Users *must* now specify a package source; Nixery no longer defaults
to anything and will fail to launch if no source is configured.
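To illustrate the commit detection from the first point above (a
sketch only; the server's actual pattern may differ):

    package builder

    import "regexp"

    // A full git commit hash is exactly 40 hex characters; anything
    // else (e.g. "master" or "nixos-19.03") is treated as a branch or
    // tag, i.e. as a moving target.
    var commitRegex = regexp.MustCompile(`^[0-9a-f]{40}$`)

    func isCommitHash(ref string) bool {
        return commitRegex.MatchString(ref)
    }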
Adds a NIX_TIMEOUT environment variable which can be set to the
maximum number of seconds each Nix builder is allowed to run.
By default this is set to 60 seconds, which should be plenty for most
use-cases as Nixery is not expected to be performing builds of
uncached binaries in most production cases.
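One way to enforce such a limit on the Go side looks roughly like this
(a sketch, assuming the server shells out to nix-build; Nix also has
its own timeout options which could serve the same purpose):

    package builder

    import (
        "context"
        "os"
        "os/exec"
        "strconv"
        "time"
    )

    // runBuilder kills the Nix builder once the NIX_TIMEOUT deadline
    // passes. Function and argument names are illustrative.
    func runBuilder(args []string) error {
        timeout := 60 * time.Second // default
        if t, err := strconv.Atoi(os.Getenv("NIX_TIMEOUT")); err == nil {
            timeout = time.Duration(t) * time.Second
        }

        ctx, cancel := context.WithTimeout(context.Background(), timeout)
        defer cancel()

        // CommandContext kills the process when the deadline passes.
        return exec.CommandContext(ctx, "nix-build", args...).Run()
    }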
Currently the errors Nix throws on a build timeout are not separated
from other types of errors, meaning that users will see a generic 500
server error in case of a timeout.
This fixes #47.
Moves the relevant parts of the customisation layer construction from
dockerTools.mkCustomisationLayer into the Nixery code base.
The version in dockerTools builds additional files (including via
hashing of potentially large files) which are not required when
serving an image over the registry protocol.
This package is used by a variety of programs that users may want to
additionally embed into Nixery (for example, cachix), but those
packages don't refer to it explicitly.
This is required by git in cases where Nixery is configured with a
custom git repository.
I've also added a shell back into the image to make debugging a
running Nixery easier. It turns out some of the dependencies already
pull in bash anyway, so this is just surfacing it on $PATH.
Before this change, Nixery would pass the image name on to Nix
unmodified, which would lead it to cache-bust the manifest and
configuration layers for images that are content-identical but have
different package ordering.
This fixes #38.
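The gist of the fix, as a sketch (the function name is illustrative):
sort the requested packages before constructing the Nix invocation, so
that content-identical images map to identical expressions.

    package builder

    import (
        "sort"
        "strings"
    )

    func normalise(imageName string) string {
        pkgs := strings.Split(imageName, "/")
        sort.Strings(pkgs)
        return strings.Join(pkgs, "/")
    }

    // normalise("shell/htop/git") == normalise("shell/git/htop")
    //                             == "git/htop/shell"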
This test, after performing the usual Nixery build, loads the built
image into Docker, runs it, pulls an image from Nixery and runs that
image.
To make this work, there is some configuration on the Travis side.
Most importantly, the following environment variables have special
values:
* `GOOGLE_KEY`: This is set to a base64-encoded service account key to
be used in the test.
* `GCS_SIGNING_PEM`: This is set to a base64-encoded signing key (in
PEM) that is used for signing URLs.
Both of these are available to all branches in the Nixery repository.
Implements a cache that keeps track of:
a) Manifests that have already been built (for up to 6 hours)
b) Layers that have already been seen (and uploaded to GCS)
This significantly speeds up response times for images that are full
or partial matches with previous images served by an instance.
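In Go terms the cache amounts to something like this (a sketch; names
and details are illustrative):

    package server

    import (
        "sync"
        "time"
    )

    type manifestEntry struct {
        data    []byte
        builtAt time.Time
    }

    type buildCache struct {
        mu        sync.RWMutex
        manifests map[string]manifestEntry // a) built manifests
        layers    map[string]bool          // b) layers known to be in GCS
    }

    func (c *buildCache) manifest(key string) ([]byte, bool) {
        c.mu.RLock()
        defer c.mu.RUnlock()

        e, ok := c.manifests[key]
        if !ok || time.Since(e.builtAt) > 6*time.Hour {
            return nil, false // missing or older than six hours
        }
        return e.data, true
    }

    func (c *buildCache) layerSeen(hash string) bool {
        c.mu.RLock()
        defer c.mu.RUnlock()
        return c.layers[hash]
    }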
Some upcoming changes might require the Nix build to be split into
multiple separate nix-build invocations of different expressions, thus
splitting this out is useful.
It also fixes an issue where `build-image/default.nix` might be called
in an environment where no Nix channels are configured.
All of them except for build-image.nix are considered trivial. On the
latter, nixfmt makes some useful changes, but by and large it is not
ready for that code yet.
Removes usage of the old layering algorithm and replaces it with the
new one.
Apart from the new layer layout this means that each layer is now
built in a separate derivation, which hopefully leads to better
cacheability.
Instead of requiring the server component to be made aware of the
location of the Nix builder via environment variables, this commit
introduces a wrapper script for the builder that can simply exist on
the builder's $PATH.
This is one step towards a slightly nicer out-of-the-box experience
when using `nix-build -A nixery-bin`.
This commit adds the actual logic for extracting layer groupings and
merging them until the layer budget is satisfied.
The implementation conforms to the design doc as of the time of this
commit.
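The budget step itself can be sketched like this (illustrative only;
the merge heuristic here is cruder than what the design doc actually
specifies):

    package layers

    import "sort"

    type group struct {
        contents   []string // store paths making up this layer
        popularity int
    }

    // mergeUntilBudget merges the two least popular groups until the
    // grouping fits within the layer budget.
    func mergeUntilBudget(groups []group, budget int) []group {
        for len(groups) > budget {
            sort.Slice(groups, func(i, j int) bool {
                return groups[i].popularity < groups[j].popularity
            })

            merged := group{
                contents:   append(groups[0].contents, groups[1].contents...),
                popularity: groups[0].popularity + groups[1].popularity,
            }
            groups = append([]group{merged}, groups[2:]...)
        }
        return groups
    }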
This script generates an entry in a text file for each time a
derivation is referred to by another in nixpkgs.
For initial testing, this output can be turned into group-layers
compatible JSON with this *trivial* invocation:
    cat output \
      | awk '{ print "{\"" $2 "\":" $1 "}"}' \
      | jq -s '. | add | with_entries(.key |= sub("/nix/store/[a-z0-9]+-";""))' \
      > test-data.json
As described in the design document, this adds considerations for
closure size and popularity. All closures meeting a certain threshold
for either value will have an extra edge from the image root to
themselves inserted in the graph, which will cause them to be
considered for inclusion in a separate layer.
This is preliminary because popularity is considered as a boolean
toggle (the input I generated only contains the top ~200 most popular
packages), but it should be using either absolute popularity values or
percentiles (needs some experimentation).
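The edge-insertion rule amounts to something like the following sketch
(thresholds are illustrative; with the boolean toggle described above,
the popularity check is a membership test rather than a comparison):

    package layers

    const (
        sizeThreshold = 1 << 20 // bytes
        popThreshold  = 100     // references within nixpkgs
    )

    type closure struct {
        path       string
        size       int64
        popularity int
    }

    // rootEdges returns the closures that should be linked directly to
    // the image root, making them candidates for their own layer.
    func rootEdges(closures []closure) []string {
        var edges []string
        for _, c := range closures {
            if c.size > sizeThreshold || c.popularity > popThreshold {
                edges = append(edges, c.path)
            }
        }
        return edges
    }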
This uses a significantly larger percentage of the total available
layers (125) than before, which means that cache hits for layers
become more likely between images.
This page describes the various steps that Nixery goes through when
"procuring" an image.
The intention is to give users some more visibility into what is going
on and to make it clear that this is not just an image storage
service.
Executes the previously added mdBook on the previously added book
source to yield a directory that can be served by Nixery on its index
page.
This is one of those 'I <3 Nix' things due to how easy it is to do.
Uses mdBook[1] to generate a documentation overview page instead of
the previous HTML site.
This makes it possible to add more elaborate documentation without
having to deal with finicky markup.
[1]: https://github.com/rust-lang-nursery/mdBook
As described in issue #14, the registry API does not allow image names
with uppercase characters in them.
However, the Nix package set has several top-level keys with uppercase
characters in them which could previously not be retrieved using
Nixery.
This change implements a method for retrieving those keys, but it is
explicitly only working for the top-level package set as nested
sets (such as `haskellPackages`) often contain packages that differ in
case only.
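The method boils down to a case-folded lookup table over the top-level
attribute names, dropping anything that becomes ambiguous (a sketch of
the idea; in Nixery the matching actually happens during the Nix
evaluation):

    package pkgs

    import "strings"

    func lowerKeys(topLevel []string) map[string]string {
        keys := make(map[string]string)
        ambiguous := make(map[string]bool)

        for _, key := range topLevel {
            lower := strings.ToLower(key)
            if ambiguous[lower] {
                continue
            }
            if _, seen := keys[lower]; seen {
                // Two keys differing only in case: neither is usable.
                delete(keys, lower)
                ambiguous[lower] = true
                continue
            }
            keys[lower] = key
        }
        return keys
    }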
Google Cloud Storage supports granting access to protected objects via
time-restricted URLs that are cryptographically signed.
This makes it possible to store private data in buckets and to
distribute it to eligible clients without having to make those clients
aware of GCS authentication methods.
Nixery now uses this feature to sign URLs for GCS buckets when
returning layer URLs to clients on image pulls. This means that a
private Nixery instance can run a bucket with restricted access just
fine.
Under the hood Nixery uses a key provided via environment variables to
sign the URL with a 5-minute expiration time.
This can be set up by adding the following two environment variables:
* GCS_SIGNING_KEY: Path to the PEM file containing the signing key.
* GCS_SIGNING_ACCOUNT: Account ("e-mail" address) to use for signing.
If the variables are not set, the previous behaviour is not modified.
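With the Go client library for GCS, the signing step is roughly this
(a sketch; the function name is illustrative):

    package server

    import (
        "io/ioutil"
        "os"
        "time"

        "cloud.google.com/go/storage"
    )

    func signedLayerURL(bucket, object string) (string, error) {
        key, err := ioutil.ReadFile(os.Getenv("GCS_SIGNING_KEY"))
        if err != nil {
            return "", err
        }

        return storage.SignedURL(bucket, object, &storage.SignedURLOptions{
            GoogleAccessID: os.Getenv("GCS_SIGNING_ACCOUNT"),
            PrivateKey:     key,
            Method:         "GET",
            Expires:        time.Now().Add(5 * time.Minute),
        })
    }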
Previously the acknowledgement calls from Docker were receiving a
404 (which apparently doesn't bother it?!). This corrects the URL,
which meant that the acknowledgement handling had to move into the
registryHandler.
Shiny new domain is much better and eliminates the TLS redirect issue
because there is an HSTS preload for the entire .dev TLD (which, by
the way, is awesome!)
This might not yet be the final version, but it's going in the right
direction.
Additionally the favicon has been reduced to just the coloured Nix
logo, because details are pretty much invisible at that size anyways.
People should not start depending on the demo instance. There have
been discussions around making a NixOS-official instance, but the
project needs to mature a little bit first.
The MD5 sum is used for verifying contents in the layer cache before
accidentally re-uploading, but the syntax of the hash invocation was
incorrect, leading to a cache-bust on the manifest layer on every
single build (even for identical images).
Uses the structured errors feature introduced in the Nix code to
return more sensible errors to clients. For now this is quite limited,
but already a lot better than before:
* packages that could not be found result in 404s
* all other errors result in 500s
This way the registry clients will not attempt to interpret the
returned garbage data/empty response as something useful.
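On the server side the mapping is essentially this (a sketch; the
error kind values stand in for whatever the structured errors carry):

    package server

    import "net/http"

    func statusFor(errorKind string) int {
        switch errorKind {
        case "not_found":
            return http.StatusNotFound // unknown packages -> 404
        default:
            return http.StatusInternalServerError // everything else -> 500
        }
    }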
Changes the return format of Nixery's build procedure to return a JSON
structure that can indicate which errors have occurred.
The server can use this information to send appropriate status codes
back to clients.
Adds environment variables with which users can configure the package
set source to use. Not setting a source lets Nix default to a recent
NixOS channel (currently nixos-19.03).
This extends the package set import mechanism in
build-registry-image.nix with several different options:
1. Importing a nixpkgs channel from GitHub (the default, pinned to
nixos-19.03)
2. Importing a custom Nix git repository. This uses builtins.fetchGit
and can thus rely on git/SSH configuration in the environment (such
as keys)
3. Importing a local filesystem path
As long as the repository pointed at is either a checkout of nixpkgs
or nixpkgs overlaid with custom packages, this will work.
A special syntax has been defined for how these three options are
passed in, but users should not need to concern themselves with it as
it will be taken care of by the server component.
This relates to #3.
Adds git & SSH as part of the Nixery image, which are required to use
Nix's builtins.fetchGit.
The dependency on interactive tools is dropped, as it was only
required during development when debugging the image building process
itself.
Introduce an empty runtime configuration object in each built layer.
This is required because Kubernetes expects the configuration to be
present (even if it's just empty values).
Providing an empty configuration will make Docker's API return a full
configuration struct with default (i.e. empty) values rather than
`null`, which works for Kubernetes.
This fixes issue #1. See the issue for additional details.
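The null-versus-empty distinction is easy to demonstrate in Go
(illustrative fields only):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type imageConfig struct {
        Env []string `json:"Env"`
        Cmd []string `json:"Cmd"`
    }

    func main() {
        var absent *imageConfig
        a, _ := json.Marshal(absent)
        fmt.Println(string(a)) // null <- what Kubernetes chokes on

        b, _ := json.Marshal(&imageConfig{})
        fmt.Println(string(b)) // {"Env":null,"Cmd":null} <- empty defaults
    }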
Instead of just dispatching on URL regexes, use handlers to split the
routes into registry-related handlers and otherwise(tm).
For now the otherwise(tm) consists of a file server serving the static
directory, rather than just a plain match on the index route.
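In outline (a sketch; handler internals and the port are
illustrative):

    package main

    import (
        "log"
        "net/http"
    )

    type registryHandler struct{}

    func (h *registryHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
        // dispatch on the registry API routes (manifests, layers, ...)
    }

    func main() {
        mux := http.NewServeMux()

        // Registry requests go to the dedicated handler ...
        mux.Handle("/v2/", &registryHandler{})

        // ... and everything otherwise(tm) falls through to the
        // static file server.
        mux.Handle("/", http.FileServer(http.Dir("static")))

        log.Fatal(http.ListenAndServe(":8080", mux))
    }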
When running on AppEngine, the image is expected to be configured with
a default entry point / command.
This sets the command to the wrapper script, so that the image can
actually run properly when deployed.
Instead of using whatever the current system default is, import a Nix
channel when building an image.
This will use Nix's internal caching behaviour for tarballs fetched
without a SHA hash.
For now the downloaded channel is pinned to nixos-19.03.
When running Nix inside of a container image, there are several
environment-specific details that need to be configured appropriately.
Most importantly, since one of the recent Nix 2.x releases, sandboxing
during builds is enabled by default. This, however, requires kernel
privileges which commonly aren't available to containers.
Nixery's demo instance (for instance, hehe) is deployed on AppEngine
where this type of container configuration is difficult, hence this
change.
Specifically, the following were changed:
* additional tools (such as tar/gzip) were introduced into the image
because the builtins-toolset in Nix does not reference these tools
via their store paths, which leads to them not being included
automatically
* Nix sandboxing was disabled in the container image
* the users/groups required by Nix were added to the container setup.
Note that these are being configured manually instead of via the
tools from the 'shadow'-package, because the latter requires some
user information (such as root) to be present already, which is not
the case inside of the container
Introduces a wrapper script which automatically sets the paths to the
required runtime data dependencies.
Additionally configures a container image derivation which will output
an image with Nixery, Nix and other dependencies.
Previously the code had hardcoded paths to runtime data (the Nix
builder & web files), which have now been moved into configuration
options.
Additionally configuration for the application is now centralised in a
single config struct, an instance of which is passed around the
application.
This makes it possible to implement a wrapper in Nix that will
configure the runtime data locations automatically.
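The shape of that struct is roughly the following (field and
environment variable names here are assumptions for illustration, not
the actual Nixery options):

    package config

    import "os"

    type Config struct {
        Builder string // path to the Nix builder script
        WebDir  string // path to the static web files
        Port    string // port for the HTTP server
    }

    func FromEnv() Config {
        return Config{
            Builder: os.Getenv("NIX_BUILDER"),
            WebDir:  os.Getenv("WEB_DIR"),
            Port:    os.Getenv("PORT"),
        }
    }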
Rather than migrating to Bazel, it seems more appropriate to use Nix
for this project.
The project is split into several different components (for data
dependencies and binaries). A derivation for building an image for
Nixery itself will be added.