# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

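# This expression is built with nix-build and the arguments below are all
# overridable. A hypothetical invocation adding an extra package to the
# container image (jq here is purely illustrative) could look like:
#
#   nix-build -A nixery-image --arg extraPackages '[ (import <nixpkgs> {}).jq ]'
#
# The resulting archive can then be imported with `docker load < result`.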
{ pkgs ? import <nixpkgs> { }
, preLaunch ? ""
, extraPackages ? [] }:

with pkgs;

let
  # Hash of all Nixery sources - this is used as the Nixery version in
  # builds to distinguish errors between deployed versions, see
  # server/logs.go for details.
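  # (The interpolation of ./. below copies the sources into the Nix store;
  # the grep/head pipeline then extracts the 32-character hash component
  # of the resulting store path.)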
  nixery-src-hash = pkgs.runCommand "nixery-src-hash" {} ''
    echo ${./.} | grep -Eo '[a-z0-9]{32}' | head -c 32 > $out
  '';

  # Go implementation of the Nixery server which implements the
  # container registry interface.
  #
  # Users should use the nixery-bin derivation below instead.
  nixery-server = callPackage ./server {
    srcHash = nixery-src-hash;
  };
in rec {
  # Implementation of the Nix image building logic
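  # (nixery-bin below places this derivation's bin/ directory on the
  # server's PATH so that image builds can be invoked at runtime.)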
  nixery-build-image = import ./build-image { inherit pkgs; };

  # Use mdBook to build a static asset page which Nixery can then
  # serve. This is primarily used for the public instance at
  # nixery.dev.
  nixery-book = callPackage ./docs { };

  # Wrapper script running the Nixery server with the above two data
  # dependencies configured.
  #
  # In most cases, this will be the derivation a user wants if they
  # are installing Nixery directly.
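  # (Building it, e.g. with `nix-build -A nixery-bin`, yields a
  # result/bin/nixery wrapper that pre-sets WEB_DIR and PATH before
  # exec'ing the server.)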
  nixery-bin = writeShellScriptBin "nixery" ''
    export WEB_DIR="${nixery-book}"
    export PATH="${nixery-build-image}/bin:$PATH"
    exec ${nixery-server}/bin/server
  '';

  nixery-popcount = callPackage ./popcount { };

  # Container image containing Nixery and Nix itself. This image can
  # be run on Kubernetes, published on AppEngine or whatever else is
  # desired.
  nixery-image = let
    # Wrapper script for the wrapper script (meta!) which configures
    # the container environment appropriately.
    #
    # Most importantly, sandboxing is disabled to avoid privilege
    # issues in containers.
    nixery-launch-script = writeShellScriptBin "nixery" ''
      set -e
      export PATH=${coreutils}/bin:$PATH
      export NIX_SSL_CERT_FILE=/etc/ssl/certs/ca-bundle.crt
      mkdir -p /tmp

      # Create the build user/group required by Nix
      echo 'nixbld:x:30000:nixbld' >> /etc/group
      echo 'nixbld:x:30000:30000:nixbld:/tmp:/bin/bash' >> /etc/passwd
      echo 'root:x:0:0:root:/root:/bin/bash' >> /etc/passwd
      echo 'root:x:0:' >> /etc/group

      # Disable sandboxing to avoid running into privilege issues
      mkdir -p /etc/nix
      echo 'sandbox = false' >> /etc/nix/nix.conf

      # In some cases users building their own image might want to
      # customise something on the inside (e.g. set up an environment
      # for keys or whatever).
      #
      # This can be achieved by setting a 'preLaunch' script.
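      # (A hypothetical preLaunch snippet could, for instance, write
      # credentials or export extra environment variables; whatever is
      # supplied is spliced in verbatim below.)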
      ${preLaunch}

      exec ${nixery-bin}/bin/nixery
    '';
  in dockerTools.buildLayeredImage {
    name = "nixery";
    config.Cmd = [ "${nixery-launch-script}/bin/nixery" ];
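    # (Docker images can only contain a limited number of layers; the cap
    # below keeps buildLayeredImage within that bound, with any remaining
    # store paths merged into the final layer.)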
    maxLayers = 96;
    contents = [
      bashInteractive
      cacert
      coreutils
      git
      gnutar
      gzip
      iana-etc
      nix
      nixery-build-image
      nixery-launch-script
      openssh
      zlib
    ] ++ extraPackages;
  };
}