For the "stdenv accidentally referring to bootstrap-tools", it seems
easier to specify the path that we don't want to depend on, e.g.
disallowedRequisites = [ bootstrapTools ];
It turns out that using clone() to start a child process is unsafe in
a multithreaded program. It can cause the initialisation of a build
child process to hang in setgroups(), as seen several times in the
build farm:
The reason is that Glibc thinks that the other threads of the parent
exist in the child, so in setxid_mark_thread() it tries to get a futex
that has been acquired by another thread just before the clone(). With
fork(), Glibc runs pthread_atfork() handlers that take care of this
(in particular, __reclaim_stacks()). But clone() doesn't do that.
Fortunately, we can use fork()+unshare() instead of clone() to set up
private namespaces.
See also https://www.mail-archive.com/lxc-devel@lists.linuxcontainers.org/msg03434.html.
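The approach is roughly the following (a sketch, not the actual code; the
exact set of namespace flags is illustrative):

#define _GNU_SOURCE
#include <sched.h>
#include <unistd.h>

pid_t startBuildChild()
{
    /* fork() runs the pthread_atfork() handlers (including
       __reclaim_stacks()), unlike clone(). */
    pid_t pid = fork();
    if (pid == 0) {
        /* Child: create the private namespaces from here instead of
           passing CLONE_NEW* flags to clone() in the parent. */
        if (unshare(CLONE_NEWNS | CLONE_NEWUTS | CLONE_NEWIPC | CLONE_NEWNET) == -1)
            _exit(1);
        /* ... set up mounts, drop privileges, exec the builder ... */
    }
    return pid;
}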
The Nixpkgs stdenv prints some custom escape sequences to denote
nesting and stuff like that. Most terminals (e.g. xterm, konsole)
ignore them, but some do not (e.g. xfce4-terminal). So for the benefit
of the latter, filter them out.
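A sketch of such a filter, assuming the sequences are ESC-initiated like
ANSI codes (the exact codes stdenv emits may differ):

#include <cctype>
#include <string>

std::string filterEscapes(const std::string & s)
{
    std::string res;
    size_t i = 0;
    while (i < s.size()) {
        if (s[i] == '\033') {
            i++; /* skip the ESC itself */
            if (i < s.size() && s[i] == '[') {
                /* CSI sequence: skip parameter bytes up to and
                   including the final alphabetic byte. */
                i++;
                while (i < s.size() && !std::isalpha((unsigned char) s[i])) i++;
                if (i < s.size()) i++;
            } else if (i < s.size())
                i++; /* two-character sequence: ESC plus one byte */
        } else
            res += s[i++];
    }
    return res;
}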
While running Python 3’s test suite, we noticed that on some systems
/dev/pts/ptmx is created with permissions 0 (that’s the case with my
Nixpkgs-originating 3.0.43 kernel, but someone with a Debian-originating
3.10-3 kernel reported not having this problem.)
There’s still the problem that people without
CONFIG_DEVPTS_MULTIPLE_INSTANCES=y are screwed (as noted in build.cc),
but I don’t see how we could work around it.
Since the addition of build-max-log-size, a call to
handleChildOutput() can result in cancellation of a goal. This
invalidated the "j" iterator in the waitForInput() loop, even though
it was still used afterwards. Likewise for the maxSilentTime
handling.
Probably fixes #231. At least it gets rid of the valgrind warnings.
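The general shape of the fix is to iterate over a copy of the goal set,
so that goals erased during cancellation cannot invalidate the loop
iterator (names simplified, not the actual code):

#include <memory>
#include <set>

struct Goal { void handleChildOutput(); };
typedef std::set<std::shared_ptr<Goal>> Goals;

void waitForInput(Goals & goals)
{
    /* Iterate over a copy: handleChildOutput() may cancel a goal,
       erasing it from `goals' and invalidating iterators into it. */
    Goals goals2(goals);
    for (auto & goal : goals2)
        goal->handleChildOutput();
}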
The daemon now creates /dev deterministically (thanks!). However, it
expects /dev/kvm to be present.
The patch below restricts that requirement (1) to Linux-based systems,
and (2) to systems where /dev/kvm already exists.
I’m not sure about the way to handle (2). We could special-case
/dev/kvm and create it (instead of bind-mounting it) in the chroot, so
it’s always available; however, it wouldn’t help much since most likely,
if /dev/kvm is missing, then KVM support is missing.
We were relying on SubstitutionGoal's destructor releasing the lock,
but if a goal is a top-level goal, the destructor won't run in a
timely manner since its reference count won't drop to zero. So
release it explicitly.
Fixes #178.
The flag ‘--check’ to ‘nix-store -r’ or ‘nix-build’ will cause Nix to
redo the build of a derivation whose output paths are already valid.
If the new output differs from the original output, an error is
printed. This makes it easier to test if a build is deterministic.
(Obviously this cannot catch all sources of non-determinism, but it
catches the most common one, namely the current time.)
For example:
$ nix-build '<nixpkgs>' -A patchelf
...
$ nix-build '<nixpkgs>' -A patchelf --check
error: derivation `/nix/store/1ipvxsdnbhl1rw6siz6x92s7sc8nwkkb-patchelf-0.6' may not be deterministic: hash mismatch in output `/nix/store/4pc1dmw5xkwmc6q3gdc9i5nbjl4dkjpp-patchelf-0.6.drv'
The --check build fails if not all outputs are valid. Thus the first
call to nix-build is necessary to ensure that all outputs are valid.
The current outputs are left untouched: the new outputs are either put
in a chroot or diverted to a different location in the store using
hash rewriting.
*headdesk*
*headdesk*
*headdesk*
So since commit 22144afa8d, Nix hasn't
actually checked whether the content of a downloaded NAR matches the
hash specified in the manifest / NAR info file. Urghhh...
On Linux, Nix can build i686 packages even on x86_64 systems. It's not
enough to recognize this situation by settings.thisSystem, we also have
to consult uname(). E.g. we can be running on an i686 Debian with an
amd64 kernel. In that situation settings.thisSystem is i686-linux, but
we still need to change personality to i686 to make builds consistent.
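Concretely, that's a personality(2) call guarded by uname() (a sketch of
the idea):

#include <string.h>
#include <sys/personality.h>
#include <sys/utsname.h>

void maybeSwitchTo32Bit(const char * system)
{
    struct utsname u;
    uname(&u);
    /* Building i686-linux on an x86_64 kernel: switch to the 32-bit
       personality so that uname() inside the build reports i686. */
    if (strcmp(system, "i686-linux") == 0 && strcmp(u.machine, "x86_64") == 0)
        personality(PER_LINUX32);
}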
On a system with multiple CPUs, running Nix operations through the
daemon is significantly slower than "direct" mode:
$ NIX_REMOTE= nix-instantiate '<nixos>' -A system
real 0m0.974s
user 0m0.875s
sys 0m0.088s
$ NIX_REMOTE=daemon nix-instantiate '<nixos>' -A system
real 0m2.118s
user 0m1.463s
sys 0m0.218s
The main reason seems to be that the client and the worker get moved
to a different CPU after every call to the worker. This patch adds a
hack to lock them to the same CPU. With this, the overhead of going
through the daemon is very small:
$ NIX_REMOTE=daemon nix-instantiate '<nixos>' -A system
real 0m1.074s
user 0m0.809s
sys 0m0.098s
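The pinning itself is a small sched_setaffinity() call, something like
(a sketch; how the daemon communicates the chosen CPU to the worker is
elided):

#define _GNU_SOURCE
#include <sched.h>

/* Lock the calling process to the CPU it currently runs on. */
void lockToCurrentCPU()
{
    int cpu = sched_getcpu();
    if (cpu == -1) return;
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    sched_setaffinity(0 /* this process */, sizeof(set), &set);
}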
This reverts commit 69b8f9980f.
The timeout should be enforced remotely. Otherwise, if the garbage
collector is running either locally or remotely, it will block the
build or closure copying for some time. If the garbage collector
takes too long, the build may time out, which is not what we want.
Also, on heavily loaded systems, copying large paths to and from the
remote machine can take a long time, also potentially resulting in a
timeout.
mount(2) with MS_BIND allows mounting a regular file on top of a regular
file, so there's no reason to only bind directories. This allows finer
control over just which files are and aren't included in the chroot
without having to build symlink trees or the like.
Signed-off-by: Shea Levy <shea@shealevy.com>
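A sketch of bind-mounting a single file: the target must already exist,
so create an empty placeholder first, then let MS_BIND shadow it with
the source file.

#include <fcntl.h>
#include <sys/mount.h>
#include <unistd.h>

int bindMountFile(const char * source, const char * target)
{
    int fd = open(target, O_WRONLY | O_CREAT, 0666);
    if (fd == -1) return -1;
    close(fd);
    /* MS_BIND works on regular files as well as directories. */
    return mount(source, target, 0, MS_BIND, 0);
}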
With C++ std::map, doing a comparison like ‘map["foo"] == ...’ has the
side-effect of adding a mapping from "foo" to the empty string if
"foo" doesn't exist in the map. So we ended up setting some
environment variables by accident.
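The fix is to use find() for lookups, which leaves the map untouched
(illustrative):

#include <map>
#include <string>

typedef std::map<std::string, std::string> Env;

bool hasValue(const Env & env, const std::string & key, const std::string & value)
{
    /* Wrong: `env[key] == value' default-constructs and inserts an
       empty string for an absent key as a side effect. */
    Env::const_iterator i = env.find(key);
    return i != env.end() && i->second == value;
}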
In particular this means that "trivial" derivations such as writeText
are not substituted, reducing the number of GET requests to the binary
cache by about 200 on a typical NixOS configuration.
Before calling dumpPath(), we have to make sure the files are owned by
the build user. Otherwise, the build could contain a hard link to
(say) /etc/shadow, which would then be read by the daemon and
rewritten as a world-readable file.
This only affects systems that don't have hard link restrictions
enabled.
The assertion in canonicalisePathMetaData() failed because the
ownership of the path already changed due to the hash rewriting. The
solution is not to check the ownership of rewritten paths.
Issue #122.
Otherwise subsequent invocations of "--repair" will keep rebuilding
the path. This only happens if the path content differs between
builds (e.g. due to timestamps).
Don't pass --timeout / --max-silent-time to the remote builder.
Instead, let the local Nix process terminate the build if it exceeds a
timeout. The remote builder will be killed as a side-effect. This
gives better error reporting (since the timeout message from the
remote side wasn't properly propagated) and handles non-Nix problems
like SSH hangs.
I'm not sure if it has ever worked correctly. The line "lastWait =
after;" seems to mean that the timer was reset every time a build
produced log output.
Note that the timeout is now per build, as documented ("the maximum
number of seconds that a builder can run").
It turns out that in multi-user Nix, a builder may be able to do
ln /etc/shadow $out/foo
Afterwards, canonicalisePathMetaData() will be applied to $out/foo,
causing /etc/shadow's mode to be set to 444 (readable by everybody but
writable by nobody). That's obviously Very Bad.
Fortunately, this fails in NixOS's default configuration because
/nix/store is a bind mount, so "ln" will fail with "Invalid
cross-device link". It also fails if hard-link restrictions are
enabled, so a workaround is:
echo 1 > /proc/sys/fs/protected_hardlinks
The solution is to check that all files in $out are owned by the build
user. This means that innocuous operations like "ln
${pkgs.foo}/some-file $out/" are now rejected, but that already failed
in chroot builds anyway.
...where <XX> is the first two characters of the derivation.
Otherwise /nix/var/log/nix/drvs may become so large that we run into
all sorts of weird filesystem limits/inefficiencies. For instance,
ext3/ext4 filesystems will barf with "ext4_dx_add_entry:1551:
Directory index full!" once you hit a few million files.
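For example, under this scheme the log of
/nix/store/4pc1dmw5xkwmc6q3gdc9i5nbjl4dkjpp-patchelf-0.6.drv ends up
under /nix/var/log/nix/drvs/4p/ instead of directly in drvs/.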
If a derivation has multiple outputs, then we only want to download
those outputs that are actually needed. So if we do "nix-build -A
openssl.man", then only the "man" output should be downloaded.
Likewise if another package depends on ${openssl.man}.
The tricky part is that different derivations can depend on different
outputs of a given derivation, so we may need to restart the
corresponding derivation goal if that happens.
For example, given a derivation with outputs "out", "man" and "bin":
$ nix-build -A pkg
produces ./result pointing to the "out" output;
$ nix-build -A pkg.man
produces ./result-man pointing to the "man" output;
$ nix-build -A pkg.all
produces ./result, ./result-man and ./result-bin;
$ nix-build -A pkg.all -A pkg2
produces ./result, ./result-man, ./result-bin and ./result-2.
vfork() is just too weird. For instance, in this build:
http://hydra.nixos.org/build/3330487
the value fromHook.writeSide becomes corrupted in the parent, even
though the child only reads from it. At -O0 the problem goes away.
Probably the child is overwriting some spilled temporary variable.
If I get bored I may implement this using posix_spawn() instead.
With this flag, if any valid derivation output is missing or corrupt,
it will be recreated by using a substitute if available, or by
rebuilding the derivation. The latter may use hash rewriting if
chroots are not available.
This operation allows fixing corrupted or accidentally deleted store
paths by redownloading them using substituters, if available.
Since the corrupted path cannot be replaced atomically, there is a
very small time window (one system call) during which neither the old
(corrupted) nor the new (repaired) contents are available. So
repairing should be used with some care on critical packages like
Glibc.
Using the immutable bit is problematic, especially in conjunction with
store optimisation. For instance, if the garbage collector deletes a
file, it has to clear its immutable bit, but if the file has
additional hard links, we can't set the bit afterwards because we
don't know the remaining paths.
So now that we support having the entire Nix store as a read-only
mount, we may as well drop the immutable bit. Unfortunately, we have
to keep the code to clear the immutable bit for backwards
compatibility.
Note that this will only work if the client has a very recent Nix
version (post 15e1b2c223), otherwise the
--option flag will just be ignored.
Fixes #50.
This handles the chroot and build hook cases, which are easy.
Supporting the non-chroot-build case will require more work (hash
rewriting!).
Issue #21.
This is required on systemd, which mounts filesystems as "shared"
subtrees. Changes to shared trees in a private mount namespace are
propagated to the outside world, which is bad.
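Concretely this amounts to remounting everything as private after
unsharing the mount namespace, along the lines of:

#include <stdio.h>
#include <sys/mount.h>

void makeMountsPrivate()
{
    /* Recursively mark the whole tree as private so that the chroot's
       bind-mounts don't propagate back to the parent namespace via
       shared subtrees. */
    if (mount(0, "/", 0, MS_PRIVATE | MS_REC, 0) == -1)
        perror("mount");
}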
Since SubstitutionGoal::finished() in build.cc computes the hash
anyway, we can prevent the inefficiency of computing the hash twice by
letting the substituter tell Nix the expected hash, so that Nix can
then verify it.
Instead make a single call to querySubstitutablePathInfo() per
derivation output. This is faster and prevents having to implement
the "have" function in the binary cache substituter.
Getting substitute information using the binary cache substituter has
non-trivial latency overhead. A package or NixOS system configuration
can have hundreds of dependencies, and in the worst case (when the
local info cache is empty) we have to do a separate HTTP request for
each of these. If the ping time to the server is t, getting N info
files will take tN seconds; e.g., with a ping time of 0.1s to
nixos.org, sequentially downloading 1000 info files (a typical NixOS
config) will take at least 100 seconds.
To fix this problem, the binary cache substituter can now perform
requests in parallel. This required changing the substituter
interface to support a function querySubstitutablePathInfos() that
queries multiple paths at the same time, and rewriting queryMissing()
to take advantage of parallelism. (Due to local caching,
parallelising queryMissing() is sufficient for most use cases, since
it's almost always called before building a derivation and thus fills
the local info cache.)
For example, parallelism speeds up querying all 1056 paths in a
particular NixOS system configuration from 116s to 2.6s. It works so
well because the eccentricity of the top-level derivation in the
dependency graph is only 9. So we only need 10 round-trips (when
using an unlimited number of parallel connections) to get everything.
Currently we do a maximum of 150 parallel connections to the server.
Thus it's important that the binary cache server (e.g. nixos.org) has
a high connection limit. Alternatively we could use HTTP pipelining,
but WWW::Curl doesn't support it and libcurl has a hard-coded limit of
5 requests per pipeline.
In a private PID namespace, processes have PIDs that are separate from
the rest of the system. The initial child gets PID 1. Processes in
the chroot cannot see processes outside of the chroot. This improves
isolation between builds. However, processes on the outside can see
processes in the chroot and send signals to them (if they have
appropriate rights).
Since the builder gets PID 1, it serves as the reaper for zombies in
the chroot. This might turn out to be a problem. In that case we'll
need to have a small PID 1 process that sits in a loop calling wait().
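Such a minimal PID 1 might look like this (a sketch, assuming all it has
to do is spawn the builder and reap):

#include <errno.h>
#include <sys/wait.h>
#include <unistd.h>

int main()
{
    pid_t builder = fork();
    if (builder == 0) {
        /* ... exec the real builder here ... */
        _exit(0);
    }
    /* Reap zombies until no children are left in the namespace. */
    while (wait(0) != -1 || errno != ECHILD)
        ;
    return 0;
}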
In chroot builds, set the host name to "localhost" and the domain name
to "(none)" (the latter being the kernel's default). This improves
determinism a bit further.
P.S. I have no idea what UTS stands for.
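The setup is just a few calls (illustrative):

#define _GNU_SOURCE
#include <sched.h>
#include <string.h>
#include <unistd.h>

void setPrivateUTS()
{
    /* Give the build its own UTS namespace, then set the host and
       domain names inside it. */
    unshare(CLONE_NEWUTS);
    sethostname("localhost", strlen("localhost"));
    setdomainname("(none)", strlen("(none)"));
}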
This improves isolation a bit further, and it's just one extra flag in
the unshare() call.
P.S. It would be very cool to use CLONE_NEWPID (to put the builder in
a private PID namespace) as well, but that's slightly more risky since
having a builder start as PID 1 may cause problems.
On Linux it's possible to run a process in its own network namespace,
meaning that it gets its own set of network interfaces, disjoint from
the rest of the system. We use this to completely remove network
access to chroot builds, except that they get a private loopback
interface. This means that:
- Builders cannot connect to the outside network or to other processes
on the same machine, except processes within the same build.
- Vice versa, other processes cannot connect to processes in a chroot
build, and open ports/connections do not show up in "netstat".
- If two concurrent builders try to listen on the same port (e.g. as
part of a test), they no longer conflict with each other.
This was inspired by the "PrivateNetwork" flag in systemd.
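A sketch of the setup: unshare the network namespace, then bring up the
private loopback interface so builds can still talk to 127.0.0.1.

#define _GNU_SOURCE
#include <net/if.h>
#include <sched.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

void setPrivateNetwork()
{
    unshare(CLONE_NEWNET);

    int fd = socket(PF_INET, SOCK_DGRAM, 0);
    if (fd == -1) return;

    struct ifreq ifr;
    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "lo", sizeof(ifr.ifr_name));
    if (ioctl(fd, SIOCGIFFLAGS, &ifr) == 0) {
        ifr.ifr_flags |= IFF_UP | IFF_RUNNING;
        ioctl(fd, SIOCSIFFLAGS, &ifr);
    }
    close(fd);
}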
There is a race condition when doing parallel builds with chroots and
the immutable bit enabled. One process may call makeImmutable()
before the other has called link(), in which case link() will fail
with EPERM. We could retry or wrap the operation in a lock, but since
this condition is rare and I'm lazy, we just use the existing copy
fallback.
Fixes #9.
Setting the UNAME26 personality causes "uname" to return "2.6.x",
regardless of the kernel version. This improves determinism in
a few misbehaved packages.
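That is a one-line personality(2) tweak (UNAME26 may be missing from
older headers; its value is 0x0020000):

#include <sys/personality.h>

#ifndef UNAME26
#define UNAME26 0x0020000
#endif

void impersonateLinux26()
{
    /* personality(0xffffffff) returns the current persona without
       changing it; OR in UNAME26 so uname() reports 2.6.x. */
    personality(personality(0xffffffff) | UNAME26);
}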
The variable ‘useChroot’ was not initialised properly. This caused
random failures if using the build hook. Seen on Mac OS X 10.7 with Clang.
Thanks to KolibriFX for finding this :-)
Chroots are initialised by hard-linking inputs from the Nix store to
the chroot. This doesn't work if the input has its immutable bit set,
because it's forbidden to create hard links to immutable files. So
temporarily clear the immutable bit when creating and destroying the
chroot.
Note that making regular files in the Nix store immutable isn't very
reliable, since the bit can easily become cleared: for instance, if we
run the garbage collector after running ‘nix-store --optimise’. So
maybe we should only make directories immutable.
This should also fix:
nix-instantiate: ./../boost/shared_ptr.hpp:254: T* boost::shared_ptr<T>::operator->() const [with T = nix::StoreAPI]: Assertion `px != 0' failed.
which was caused by hashDerivationModulo() calling the ‘store’
object (during store upgrades) before openStore() assigned it.
derivations added to the store by clients have "correct" output
paths (meaning that the output paths are computed by hashing the
derivation according to a certain algorithm). This means that a
malicious user could craft a special .drv file to build *any*
desired path in the store with any desired contents (so long as the
path doesn't already exist). Then the attacker just needs to wait
for a victim to come along and install the compromised path.
For instance, if Alice (the attacker) knows that the latest Firefox
derivation in Nixpkgs produces the path
/nix/store/1a5nyfd4ajxbyy97r1fslhgrv70gj8a7-firefox-5.0.1
then (provided this path doesn't already exist) she can craft a .drv
file that creates that path (i.e., has it as one of its outputs),
add it to the store using "nix-store --add", and build it with
"nix-store -r". So the fake .drv could write a Trojan to the
Firefox path. Then, if user Bob (the victim) comes along and does
$ nix-env -i firefox
$ firefox
he executes the Trojan injected by Alice.
The fix is to have the Nix daemon verify that derivation outputs are
correct (in addValidPath()). This required some refactoring to move
the hash computation code to libstore.
hook script proper, and the stdout/stderr of the builder. Only the
latter should be saved in /nix/var/log/nix/drvs.
* Allow the verbosity to be set through an option.
* Added a flag --quiet to lower the verbosity level.
it requires a certain feature on the build machine, e.g.
requiredSystemFeatures = [ "kvm" ];
We need this in Hydra to make sure that builds that require KVM
support are forwarded to machines that have KVM support. Probably
this should also be enforced for local builds.
the hook every time we want to ask whether we can run a remote build
(which can be very often), we now reuse a hook process for answering
those queries until it accepts a build. So if there are N
derivations to be built, at most N hooks will be started.
using the build hook mechanism, by setting the derivation attribute
"preferLocalBuild" to true. This has a few use cases:
- The user environment builder. Since it just creates a bunch of
symlinks without much computation, there is no reason to do it
remotely. In fact, doing it remotely requires the entire closure
of the user environment to be copied to the remote machine, which
is extremely wasteful.
- `fetchurl'. Performing the download on a remote machine and then
copying it to the local machine involves twice as much network
traffic as performing the download locally, and doesn't save any
CPU cycles on the local machine.
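Like requiredSystemFeatures above, it's just a derivation attribute:

preferLocalBuild = true;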
A "using namespace std" was added locally in those functions that refer to
names from <cstring>. That is not pretty, but it's a very portable solution,
because strcpy() and friends will be found in both the 'std' and in the global
namespace.
This patch adds the configuration file variable "build-cores" and the
command line argument "--cores". These settings specify the number of
CPU cores to utilize for parallel building within a job, i.e. by passing
an appropriate "-j" flag to GNU Make. The default value is 1, which
means that parallel building is *disabled*. If the number of build cores
is specified as 0 (synonymously: "guess" or "auto"), then the actual
value is supposed to be auto-detected by builders at run-time, i.e. by
calling the nproc(1) utility from coreutils.
The environment variable $NIX_BUILD_CORES is available to builders, but
the contents of that variable do *not* influence the hash that goes
into the $out store path, i.e. the number of build cores to be utilized
can be changed at will without requiring any re-builds.
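For example, a builder might pass the value straight to GNU Make (with
extra handling if the value is the auto-detect setting 0):

make -j$NIX_BUILD_CORES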
_FILE_OFFSET_BITS=64. Without it, functions like stat() fail on
large file sizes. This happened with a Nix store on squashfs:
$ nix-store --dump /tmp/mnt/46wzqnk4cbdwh1dclhrpqnnz1icak6n7-local-net-cmds > /dev/null
error: getting attributes of path `/tmp/mnt/46wzqnk4cbdwh1dclhrpqnnz1icak6n7-local-net-cmds': Value too large for defined data type
$ stat /tmp/mnt/46wzqnk4cbdwh1dclhrpqnnz1icak6n7-local-net-cmds
File: `/tmp/mnt/46wzqnk4cbdwh1dclhrpqnnz1icak6n7-local-net-cmds'
Size: 0 Blocks: 36028797018963968 IO Block: 1024 regular empty file
(This is a bug in squashfs or mksquashfs, but it shouldn't cause Nix
to fail.)
(that is, call the build hook with a certain interval until it
accepts the build).
* build-remote.pl was totally broken: for all system types other than
the local system type, it would send all builds to the *first*
machine of the appropriate type.
poll for it (i.e. if we can't acquire the lock, then let the main
select() loop wait for at most a few seconds and then try again).
This improves parallelism: if two nix-store processes are both
trying to build a path at the same time, the second one shouldn't
block; it should first see if it can build other goals. Also, it
prevents the deadlocks that have been occurring in Hydra lately,
where a process waits for a lock held by another process that's
waiting for a lock held by the first.
The downside is that polling isn't really elegant, but POSIX doesn't
provide a way to wait for locks in a select() loop. The only
solution would be to spawn a thread for each lock to do a blocking
fcntl() and then signal the main thread, but that would require
pthreads.
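The non-blocking attempt is an F_SETLK (rather than F_SETLKW) call,
roughly:

#include <fcntl.h>
#include <unistd.h>

/* Returns false if the lock is held by another process, in which case
   the caller goes back to its select() loop and retries after a few
   seconds. */
bool tryLockFile(int fd)
{
    struct flock fl;
    fl.l_type = F_WRLCK;
    fl.l_whence = SEEK_SET;
    fl.l_start = 0;
    fl.l_len = 0; /* lock the whole file */
    return fcntl(fd, F_SETLK, &fl) != -1;
}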
sure that it works as expected when you pass it a derivation. That
is, we have to make sure that all build-time dependencies are built,
and that they are all in the input closure (otherwise remote builds
might fail, for example). This is ensured at instantiation time by
adding all derivations and their sources to inputDrvs and inputSrcs.
hook. This fixes a problem with log files being partially or
completely filled with 0's because another nix-store process
truncates the log file. It should also be more efficient.
the DerivationGoal runs. Otherwise, if a goal is a top-level goal,
then the lock won't be released until nix-store finishes. With
--keep-going and lots of top-level goals, it's possible to run out
of file descriptors (this happened sometimes in the build farm for
Nixpkgs). Also, for a failed derivation, it won't be possible to
build it again until the lock is released.
* Idem for locks on build users: these weren't released in a timely
manner for failed top-level derivation goals. So if there were more
than (say) 10 such failed builds, you would get an error about
having run out of build users.
scan for runtime dependencies (i.e. the local machine shouldn't do a
scan that the remote machine has already done). Also pipe directly
into `nix-store --import': don't use a temporary file.
(e.g. an SSH connection problem) and permanent failures (i.e. the
builder failed). This matters to Hydra (it wants to know whether it
makes sense to retry a build).
closure of the inputs. This really enforces that there can't be any
undeclared dependencies on paths in the store. This is done by
creating a fake Nix store and creating bind-mounts or hard-links in
the fake store for all paths in the closure. After the build, the
build output is moved from the fake store to the real store. TODO:
the chroot has to be on the same filesystem as the Nix store for
this to work, but this isn't enforced yet. (I.e. it only works
currently if /tmp is on the same FS as /nix/store.)
bind-mounts we do are only visible to the builder process and its
children. So accidentally doing "rm -rf" on the chroot directory
won't wipe out /nix/store and other bind-mounted directories
anymore. Also, the bind-mounts in the private namespace disappear
automatically when the builder exits.
necessary that at least one build hook doesn't return "postpone",
otherwise nix-store will barf ("waiting for a build slot, yet there
are no running children"). So inform the build hook when this is
the case, so that it can start a build even when that would exceed
the maximum load on a machine.
subtle and often hard-to-reproduce bugs where programs in pipes
either barf with a "Broken pipe" message or not, depending on the
exact timing conditions. This particularly happened in GNU M4 (and
Bison, which uses M4).
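A sketch of restoring the default SIGPIPE disposition before exec'ing
the builder (assuming that's the mechanism at play):

#include <signal.h>

void restoreSIGPIPE()
{
    /* A child that inherits SIG_IGN for SIGPIPE behaves differently in
       pipelines than it would when started from a normal shell. */
    struct sigaction act;
    act.sa_handler = SIG_DFL;
    act.sa_flags = 0;
    sigemptyset(&act.sa_mask);
    sigaction(SIGPIPE, &act, 0);
}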
disasters involving `rm -rf' on bind mounts. Will try the
definitive fix (per-process mounts, apparently possible via the
CLONE_NEWNS flag in clone()) some other time.
particular, dietlibc cannot figure out the cwd because the inode of
the current directory doesn't appear in .. (because getdents returns
the inode of the mount point).
need /etc in the chroot (in particular, /etc/resolv.conf for
fetchurl). Not having /etc/resolv.conf in the chroot is a good
thing, since we don't want normal derivations to download files.
again. (After the previous substituter mechanism refactoring I
didn't update the code that obtains the references of substitutable
paths.) This required some refactoring: the substituter programs
are now kept running and receive/respond to info requests via
stdin/stdout.
/tmp/nix-<pid>-<counter> for temporary build directories. This
increases purity a bit: many packages store the temporary build path
in their output, causing (generally unimportant) binary differences.
executed in a chroot that contains just the Nix store, the temporary
build directory, and a configurable set of additional directories
(/dev and /proc by default). This allows a bit more purity
enforcement: hidden build-time dependencies on directories such as
/usr or /nix/var/nix/profiles are no longer possible. As an added
benefit, accidental network downloads (cf. NIXPKGS-52) are prevented
as well (because files such as /etc/resolv.conf are not available in
the chroot).
However the usefulness of chroots is diminished by the fact that
many builders depend on /bin/sh, so you need /bin in the list of
additional directories. (And then on non-NixOS you need /lib as
well...)
fixed-output derivations or substitutions try to build the same
store path at the same time. Locking generally catches this, but
not between multiple goals in the same process. This happened
especially often (actually, only) in the build farm with fetchurl
downloads of the same file being executed on multiple machines and
then copied back to the main machine where they would clobber each
other (NIXBF-13).
Solution: if a goal notices that the output path is already locked,
then go to sleep until another goal finishes (hopefully the one
locking the path) and try again.
need any info on substitutable paths, we just call the substituters
(such as download-using-manifests.pl) directly. This means that
it's no longer necessary for nix-pull to register substitutes or for
nix-channel to clear them, which makes those operations much faster
(NIX-95). Also, we don't have to worry about keeping nix-pull
manifests (in /nix/var/nix/manifests) and the database in sync with
each other.
The downside is that there is some overhead in calling an external
program to get the substitutes info. For instance, "nix-env -qas"
takes a bit longer.
Abolishing the substitutes table also makes the logic in
local-store.cc simpler, as we don't need to store info for invalid
paths. On the downside, you cannot do things like "nix-store -qR"
on a substitutable but invalid path (but nobody did that anyway).
* Never catch interrupts (the Interrupted exception).
from a source directory. All files for which a predicate function
returns true are copied to the store. Typical example is to leave
out the .svn directory:
stdenv.mkDerivation {
  ...
  src = builtins.filterSource
    (path: baseNameOf (toString path) != ".svn")
    ./source-dir;
  # as opposed to
  # src = ./source-dir;
}
This is important because the .svn directory influences the hash in
a rather unpredictable and variable way.
* Throw more exceptions as BuildErrors instead of Errors. This
matters when --keep-going is turned on. (A BuildError is caught
and terminates the goal in question, an Error terminates the
program.)
seconds without producing output on stdout or stderr (NIX-65). This
timeout can be specified using the `--max-silent-time' option or the
`build-max-silent-time' configuration setting. The default is
infinity (0).
* Fix a tricky race condition: if we kill the build user before the
child has done its setuid() to the build user uid, then it won't be
killed, and we'll potentially lock up in pid.wait(). So also send a
conventional kill to the child.