Nixery

This package implements a Docker-compatible container registry that transparently builds and serves container images using Nix.

The project started out with the intention of becoming a Kubernetes controller that can serve declarative image specifications, defined in CRDs, as container images. The design for this is outlined in a public gist.

Currently it focuses on the ad-hoc creation of container images as outlined below, with an example instance available at nixery.appspot.com.

This is not an officially supported Google project.

Ad-hoc container images

Nixery supports building images on-demand based on the image name. Every package that the user intends to include in the image is specified as a path component of the image name.

The path components refer to top-level keys in nixpkgs and are used to build a container image using Nix's buildLayeredImage functionality.

The special meta-package shell provides an image base with many core components (such as bash and coreutils) that users commonly expect in interactive images.
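
To make this concrete, an image name such as shell/curl corresponds roughly to a Nix expression like the minimal sketch below. It calls nixpkgs' dockerTools.buildLayeredImage directly; the package set standing in for the shell meta-package and the image name/tag are assumptions for illustration, not Nixery's actual build-registry-image.nix:

{ pkgs ? import <nixpkgs> {} }:

# Rough equivalent of pulling nixery.appspot.com/shell/curl:
# "shell" stands in for an interactive base (assumed package set),
# "curl" is a top-level nixpkgs attribute.
pkgs.dockerTools.buildLayeredImage {
  name = "shell-curl";
  tag = "latest";
  contents = with pkgs; [ bashInteractive coreutils curl ];
}

Building such an expression with nix-build yields an image tarball that docker load accepts; Nixery automates roughly this step on demand and serves the result over the registry protocol.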

Usage example

Using the publicly available Nixery instance at nixery.appspot.com, one could retrieve a container image containing curl and an interactive shell like this:

tazjin@tazbox:~$ sudo docker run -ti nixery.appspot.com/shell/curl bash
Unable to find image 'nixery.appspot.com/shell/curl:latest' locally
latest: Pulling from shell/curl
7734b79e1ba1: Already exists
b0d2008d18cd: Pull complete
< ... some layers omitted ...>
Digest: sha256:178270bfe84f74548b6a43347d73524e5c2636875b673675db1547ec427cf302
Status: Downloaded newer image for nixery.appspot.com/shell/curl:latest
bash-4.4# curl --version
curl 7.64.0 (x86_64-pc-linux-gnu) libcurl/7.64.0 OpenSSL/1.0.2q zlib/1.2.11 libssh2/1.8.0 nghttp2/1.35.1

Known issues

  • Initial build times for an image can be somewhat slow while Nixery retrieves the required derivations from the Nix cache under the hood.

    Due to how the Docker Registry API works, there is no way to provide progress feedback to the user during this period - hence the current UX in interactive mode is that "nothing is happening" for a while after the Unable to find image message is printed.

  • For some reason these images do not currently work in GKE clusters. Launching a Kubernetes pod that uses a Nixery image results in an error stating "unable to convert a nil pointer to a runtime API image: ImageInspectError".

    This error comes from here and it occurs after the Kubernetes node has retrieved the image from Nixery (as per the Nixery logs).

Kubernetes integration (in the future)

Note: The Kubernetes integration is not yet implemented.

The basic idea of the Kubernetes integration is to provide a way for users to specify the contents of a container image as an API object in Kubernetes, which Nix will then build transparently when the container starts.

For example, given a resource that looks like this:

---
apiVersion: k8s.nixos.org/v1alpha
kind: NixImage
metadata:
  name: curl-and-jq
data:
  tag: v1
  contents:
    - curl
    - jq
    - bash

One could create a container that references the curl-and-jq image, which Nix will then build when the container image is pulled.

The controller itself runs as a DaemonSet on every node in the cluster, providing a host-mounted /nix/store folder for caching purposes.