Compare commits

...

44 commits

Author SHA1 Message Date
f20353b727 fix(storage01): pass through the admin API of Garage
not the web API!

Signed-off-by: Ryan Lahfa <ryan@dgnum.eu>
2024-10-10 17:52:22 +02:00
a4de5f4d31 feat(krz01): move ollama to compute01 via a reverse proxy
krz01 has no public web IP.

Signed-off-by: Ryan Lahfa <ryan@dgnum.eu>
2024-10-10 17:40:56 +02:00
363f8d3c67 fix(krz01): open 80/443 for ACME
Oopsie!

Signed-off-by: Ryan Lahfa <ryan@dgnum.eu>
2024-10-10 17:20:28 +02:00
12b20e6acf feat(storage01): add monorepo-terraform-state.s3.dgnum.eu
This is required to bootstrap the Terranix setup.

Signed-off-by: Ryan Lahfa <ryan@dgnum.eu>
2024-10-10 17:13:11 +02:00
de6742aa0d feat(storage01): add s3-admin.dgnum.eu
This is the administration endpoint of the S3, you can create new
buckets and more, from there.

Signed-off-by: Ryan Lahfa <ryan@dgnum.eu>
2024-10-10 17:13:11 +02:00
d76e655174 feat(krz01): add a NGINX in front of ollama protected by password
This way, you can do direct requests to ollama from other places.

Signed-off-by: Ryan Lahfa <ryan@dgnum.eu>
2024-10-10 16:43:33 +02:00
sinavir 7d70beb1f0 feat(krz01): create and add the lab admin group to krz01 2024-10-10 13:35:34 +02:00
dae3b7c7f6 fix(web02): Remove test user 2024-10-10 09:41:58 +02:00
1e71ef3636 feat(users): Add root passwords and deactivate mutableUsers 2024-10-10 09:23:19 +02:00
7bdc70632c chore(scripts): Cleanup of old caching script 2024-10-10 01:04:16 +02:00
d05c003fd6 feat(workflows/eval): Generalize the new script to all nodes 2024-10-10 00:58:41 +02:00
5b271b7b4a feat(nat): enabling for dgnum members for tests 2024-10-10 00:00:56 +02:00
93c47f47be fix: laptop change 2024-10-09 23:47:29 +02:00
47ad002f12 feat(workflows/eval): Try to build and upload in one fell swoop 2024-10-09 21:34:22 +02:00
6b23df6b54 feat(workflows/eval): Try to build and upload in one fell swoop 2024-10-09 21:32:38 +02:00
6c4099d369 feat(infra): Internalize nix-lib, and make keys management simpler 2024-10-09 18:58:46 +02:00
53c865a335 fix(dgsi): Set to an existing version 2024-10-09 18:57:06 +02:00
34640d467b feat(krz01): finish ollama integration and whisper.cpp
My sanity was used in the process.

Signed-off-by: Ryan Lahfa <ryan@dgnum.eu>
2024-10-09 13:59:05 +02:00
8441992408 feat(krz01): move to unstable
Signed-off-by: Ryan Lahfa <ryan@dgnum.eu>
2024-10-09 12:51:14 +02:00
4bedb3f497 feat(krz01): move the GPU stuff to the host for now
We also add a K80 specific patch for ollama.

Signed-off-by: Ryan Lahfa <ryan@dgnum.eu>
2024-10-09 09:33:57 +02:00
8160b2762f feat(krz01): passthrough the nVidia Tesla K80 in ml01
This way, no need for reboot.

Signed-off-by: Ryan Lahfa <ryan@dgnum.eu>
2024-10-09 09:33:57 +02:00
ebed6462f6 feat(krz01): introduce ML01 -- a machine learning VM
I will add ollama on it later on and passthrough the GPU in there.

Signed-off-by: Ryan Lahfa <ryan@dgnum.eu>
2024-10-09 09:33:57 +02:00
e200ae53a4 feat(proxmox): Revert the disabling 2024-10-08 20:59:34 +02:00
62b36ed124 fix(krz01): apply a correctness patch on proxmox-nixos
To make CI happy.

Signed-off-by: Ryan Lahfa <ryan@dgnum.eu>
2024-10-08 20:49:41 +02:00
9bc651db42 chore(nix-patches): Add helper function 2024-10-08 20:49:26 +02:00
bfe4957926 feat(patches): Generalize 2024-10-08 18:37:17 +02:00
3aeae4e33f feat(krz01): add basic microvm exprs
For a router01.

Signed-off-by: Ryan Lahfa <ryan@dgnum.eu>
2024-10-08 17:14:23 +02:00
4d689fee33 feat(krz01): enable proprietary drivers for nVidia
For the Tesla K80.

Signed-off-by: Ryan Lahfa <ryan@dgnum.eu>
2024-10-08 16:58:47 +02:00
862f004e3c fix(krz01): disable proxmox for now
Until #139 is merged.

Signed-off-by: Ryan Lahfa <ryan@dgnum.eu>
2024-10-08 16:40:18 +02:00
sinavir da40fa9b3d fix(krz01): Fix root password hash 2024-10-08 16:05:19 +02:00
c642e98ab9 fix(cache): Make instructions and code work 2024-10-08 15:50:21 +02:00
fb610306ee feat(workflows/eval): Add krz01 to the CI 2024-10-08 14:15:01 +02:00
37d0ca9489 chore(dgsi): Update? 2024-10-08 14:13:03 +02:00
sinavir 39f5cad75d feat(krz01): Proxmox 2024-10-08 13:59:28 +02:00
sinavir c6588da802 fix(krz01): Use default target 2024-10-08 12:57:57 +02:00
sinavir a194da9662 fix(krz01): Enable netbird 2024-10-08 12:51:57 +02:00
sinavir 70c69346fb feat(krz01): init 2024-10-08 12:35:59 +02:00
sinavir bdf0e4cf7a feat(binary-cache): Add some hints on how to configure the cache 2024-10-06 23:57:57 +02:00
e4fc6a0d98 chore(npins): Update 2024-10-06 22:21:07 +02:00
8769d6738e fix(cas-eleves): Remove dependency on pytest-runner 2024-10-06 18:40:48 +02:00
7d24e2dfc1 feat(dgsi): Update, with SAML provisional auth 2024-10-06 18:40:48 +02:00
sinavir 38231eb6e0 feat(attic): Bye bye attic 2024-10-06 18:33:04 +02:00
f589be422e fix(meta): Use root@ for the proxyjump to bridge01 2024-10-03 12:57:43 +02:00
sinavir e70d0be931 chore(garage): update 2024-10-02 19:20:17 +02:00
89 changed files with 2065 additions and 632 deletions

View file

@ -9,281 +9,192 @@ on:
- main - main
jobs: jobs:
build_compute01: build_and_cache_krz01:
runs-on: nix runs-on: nix
steps: steps:
- uses: actions/checkout@v3 - uses: actions/checkout@v3
- name: Build compute01 - name: Build and cache the node
run: | run: nix-shell --run cache-node
# Enter the shell
nix-shell --run 'colmena build --on compute01'
build_storage01:
runs-on: nix
steps:
- uses: actions/checkout@v3
- name: Build storage01
run: |
# Enter the shell
nix-shell --run 'colmena build --on storage01'
build_vault01:
runs-on: nix
steps:
- uses: actions/checkout@v3
- name: Build vault01
run: |
# Enter the shell
nix-shell --run 'colmena build --on vault01'
build_web01:
runs-on: nix
steps:
- uses: actions/checkout@v3
- name: Build web01
run: |
# Enter the shell
nix-shell --run 'colmena build --on web01'
build_web02:
runs-on: nix
steps:
- uses: actions/checkout@v3
- name: Build web02
run: |
# Enter the shell
nix-shell --run 'colmena build --on web02'
build_rescue01:
runs-on: nix
steps:
- uses: actions/checkout@v3
- name: Build rescue01
run: |
# Enter the shell
nix-shell --run 'colmena build --on rescue01'
build_geo01:
runs-on: nix
steps:
- uses: actions/checkout@v3
- name: Build geo01
run: |
# Enter the shell
nix-shell --run 'colmena build --on geo01'
build_geo02:
runs-on: nix
steps:
- uses: actions/checkout@v3
- name: Build geo02
run: |
# Enter the shell
nix-shell --run 'colmena build --on geo02'
build_bridge01:
runs-on: nix
steps:
- uses: actions/checkout@v3
- name: Build bridge01
run: |
# Enter the shell
nix-shell --run 'colmena build --on bridge01'
push_to_cache_compute01:
runs-on: nix
needs:
- build_compute01
steps:
- uses: actions/checkout@v3
- name: Push to cache
run: nix-shell --run push-to-nix-cache
env: env:
STORE_ENDPOINT: "https://tvix-store.dgnum.eu/infra-signing/" STORE_ENDPOINT: "https://tvix-store.dgnum.eu/infra-signing/"
STORE_USER: "admin" STORE_USER: "admin"
STORE_PASSWORD: ${{ secrets.STORE_PASSWORD }} STORE_PASSWORD: ${{ secrets.STORE_PASSWORD }}
NODES: '[ "compute01" ]' BUILD_NODE: "krz01"
- uses: actions/upload-artifact@v3
if: always()
with:
name: outputs_krz01
path: paths.txt
build_and_cache_compute01:
runs-on: nix
steps:
- uses: actions/checkout@v3
- name: Build and cache the node
run: nix-shell --run cache-node
env:
STORE_ENDPOINT: "https://tvix-store.dgnum.eu/infra-signing/"
STORE_USER: "admin"
STORE_PASSWORD: ${{ secrets.STORE_PASSWORD }}
BUILD_NODE: "compute01"
- uses: actions/upload-artifact@v3 - uses: actions/upload-artifact@v3
if: always() if: always()
with: with:
name: outputs_compute01 name: outputs_compute01
path: uploaded.txt path: paths.txt
push_to_cache_storage01: build_and_cache_storage01:
runs-on: nix runs-on: nix
needs:
- build_storage01
steps: steps:
- uses: actions/checkout@v3 - uses: actions/checkout@v3
- name: Push to cache - name: Build and cache the node
run: nix-shell --run push-to-nix-cache run: nix-shell --run cache-node
env: env:
STORE_ENDPOINT: "https://tvix-store.dgnum.eu/infra-signing/" STORE_ENDPOINT: "https://tvix-store.dgnum.eu/infra-signing/"
STORE_USER: "admin" STORE_USER: "admin"
STORE_PASSWORD: ${{ secrets.STORE_PASSWORD }} STORE_PASSWORD: ${{ secrets.STORE_PASSWORD }}
NODES: '[ "storage01" ]' BUILD_NODE: "storage01"
- uses: actions/upload-artifact@v3 - uses: actions/upload-artifact@v3
if: always() if: always()
with: with:
name: outputs_storage01 name: outputs_storage01
path: uploaded.txt path: paths.txt
push_to_cache_rescue01: build_and_cache_rescue01:
runs-on: nix runs-on: nix
needs:
- build_rescue01
steps: steps:
- uses: actions/checkout@v3 - uses: actions/checkout@v3
- name: Push to cache - name: Build and cache the node
run: nix-shell --run push-to-nix-cache run: nix-shell --run cache-node
env: env:
STORE_ENDPOINT: "https://tvix-store.dgnum.eu/infra-signing/" STORE_ENDPOINT: "https://tvix-store.dgnum.eu/infra-signing/"
STORE_USER: "admin" STORE_USER: "admin"
STORE_PASSWORD: ${{ secrets.STORE_PASSWORD }} STORE_PASSWORD: ${{ secrets.STORE_PASSWORD }}
NODES: '[ "rescue01" ]' BUILD_NODE: "rescue01"
- uses: actions/upload-artifact@v3 - uses: actions/upload-artifact@v3
if: always() if: always()
with: with:
name: outputs_rescue01 name: outputs_rescue01
path: uploaded.txt path: paths.txt
push_to_cache_geo01: build_and_cache_geo01:
runs-on: nix runs-on: nix
needs:
- build_geo01
steps: steps:
- uses: actions/checkout@v3 - uses: actions/checkout@v3
- name: Push to cache - name: Build and cache the node
run: nix-shell --run push-to-nix-cache run: nix-shell --run cache-node
env: env:
STORE_ENDPOINT: "https://tvix-store.dgnum.eu/infra-signing/" STORE_ENDPOINT: "https://tvix-store.dgnum.eu/infra-signing/"
STORE_USER: "admin" STORE_USER: "admin"
STORE_PASSWORD: ${{ secrets.STORE_PASSWORD }} STORE_PASSWORD: ${{ secrets.STORE_PASSWORD }}
NODES: '[ "geo01" ]' BUILD_NODE: "geo01"
- uses: actions/upload-artifact@v3 - uses: actions/upload-artifact@v3
if: always() if: always()
with: with:
name: outputs_geo01 name: outputs_geo01
path: uploaded.txt path: paths.txt
push_to_cache_geo02: build_and_cache_geo02:
runs-on: nix runs-on: nix
needs:
- build_geo02
steps: steps:
- uses: actions/checkout@v3 - uses: actions/checkout@v3
- name: Push to cache - name: Build and cache the node
run: nix-shell --run push-to-nix-cache run: nix-shell --run cache-node
env: env:
STORE_ENDPOINT: "https://tvix-store.dgnum.eu/infra-signing/" STORE_ENDPOINT: "https://tvix-store.dgnum.eu/infra-signing/"
STORE_USER: "admin" STORE_USER: "admin"
STORE_PASSWORD: ${{ secrets.STORE_PASSWORD }} STORE_PASSWORD: ${{ secrets.STORE_PASSWORD }}
NODES: '[ "geo02" ]' BUILD_NODE: "geo02"
- uses: actions/upload-artifact@v3 - uses: actions/upload-artifact@v3
if: always() if: always()
with: with:
name: outputs_geo02 name: outputs_geo02
path: uploaded.txt path: paths.txt
push_to_cache_vault01: build_and_cache_vault01:
runs-on: nix runs-on: nix
needs:
- build_vault01
steps: steps:
- uses: actions/checkout@v3 - uses: actions/checkout@v3
- name: Push to cache - name: Build and cache the node
run: nix-shell --run push-to-nix-cache run: nix-shell --run cache-node
env: env:
STORE_ENDPOINT: "https://tvix-store.dgnum.eu/infra-signing/" STORE_ENDPOINT: "https://tvix-store.dgnum.eu/infra-signing/"
STORE_USER: "admin" STORE_USER: "admin"
STORE_PASSWORD: ${{ secrets.STORE_PASSWORD }} STORE_PASSWORD: ${{ secrets.STORE_PASSWORD }}
NODES: '[ "vault01" ]' BUILD_NODE: "vault01"
- uses: actions/upload-artifact@v3 - uses: actions/upload-artifact@v3
if: always() if: always()
with: with:
name: outputs_vault01 name: outputs_vault01
path: uploaded.txt path: paths.txt
push_to_cache_web01: build_and_cache_web01:
runs-on: nix runs-on: nix
needs:
- build_web01
steps: steps:
- uses: actions/checkout@v3 - uses: actions/checkout@v3
- name: Push to cache - name: Build and cache the node
run: nix-shell --run push-to-nix-cache run: nix-shell --run cache-node
env: env:
STORE_ENDPOINT: "https://tvix-store.dgnum.eu/infra-signing/" STORE_ENDPOINT: "https://tvix-store.dgnum.eu/infra-signing/"
STORE_USER: "admin" STORE_USER: "admin"
STORE_PASSWORD: ${{ secrets.STORE_PASSWORD }} STORE_PASSWORD: ${{ secrets.STORE_PASSWORD }}
NODES: '[ "web01" ]' BUILD_NODE: "web01"
- uses: actions/upload-artifact@v3 - uses: actions/upload-artifact@v3
if: always() if: always()
with: with:
name: outputs_web01 name: outputs_web01
path: uploaded.txt path: paths.txt
push_to_cache_web02: build_and_cache_web02:
runs-on: nix runs-on: nix
needs:
- build_web02
steps: steps:
- uses: actions/checkout@v3 - uses: actions/checkout@v3
- name: Push to cache - name: Build and cache the node
run: nix-shell --run push-to-nix-cache run: nix-shell --run cache-node
env: env:
STORE_ENDPOINT: "https://tvix-store.dgnum.eu/infra-signing/" STORE_ENDPOINT: "https://tvix-store.dgnum.eu/infra-signing/"
STORE_USER: "admin" STORE_USER: "admin"
STORE_PASSWORD: ${{ secrets.STORE_PASSWORD }} STORE_PASSWORD: ${{ secrets.STORE_PASSWORD }}
NODES: '[ "web02" ]' BUILD_NODE: "web02"
- uses: actions/upload-artifact@v3 - uses: actions/upload-artifact@v3
if: always() if: always()
with: with:
name: outputs_web02 name: outputs_web02
path: uploaded.txt path: paths.txt
push_to_cache_bridge01: build_and_cache_bridge01:
runs-on: nix runs-on: nix
needs:
- build_bridge01
steps: steps:
- uses: actions/checkout@v3 - uses: actions/checkout@v3
- name: Push to cache - name: Build and cache the node
run: nix-shell --run push-to-nix-cache run: nix-shell --run cache-node
env: env:
STORE_ENDPOINT: "https://tvix-store.dgnum.eu/infra-signing/" STORE_ENDPOINT: "https://tvix-store.dgnum.eu/infra-signing/"
STORE_USER: "admin" STORE_USER: "admin"
STORE_PASSWORD: ${{ secrets.STORE_PASSWORD }} STORE_PASSWORD: ${{ secrets.STORE_PASSWORD }}
NODES: '[ "bridge01" ]' BUILD_NODE: "bridge01"
- uses: actions/upload-artifact@v3 - uses: actions/upload-artifact@v3
if: always() if: always()
with: with:
name: outputs_web02 name: outputs_web02
path: uploaded.txt path: paths.txt
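The jobs above now delegate both the build and the cache upload to a single `cache-node` helper exposed by the repository's dev shell; the helper itself is not part of this diff. A minimal sketch of what such a helper might look like, assuming it wraps `colmena build` and reads the `BUILD_NODE` and `STORE_*` variables set by the workflow:

```nix
# Purely illustrative sketch: the real cache-node script is provided by the
# dev shell and is not shown in this diff.
{ pkgs }:
pkgs.writeShellApplication {
  name = "cache-node";
  runtimeInputs = [ pkgs.colmena pkgs.nix ];
  text = ''
    # BUILD_NODE, STORE_ENDPOINT, STORE_USER and STORE_PASSWORD come from the
    # workflow environment above.
    colmena build --on "$BUILD_NODE"

    # The resulting store paths would then be written to paths.txt (the artifact
    # uploaded by the workflow) and pushed to "$STORE_ENDPOINT", e.g. with `nix copy`.
  '';
}
```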

View file

@ -9,6 +9,21 @@ You're expected to read this document before committing to the repo.
 Some documentation for the development tools are provided in the aforementioned file.
 
+# Using the binary cache
+
+Add the following module to your configuration (and pin this repo using your favorite tool: npins, lon, etc...):
+
+```
+{ lib, ... }:
+let
+  dgnum-infra = PINNED_PATH_TO_INFRA;
+in {
+  nix.settings = (import dgnum-infra { }).mkCacheSettings {
+    caches = [ "infra" ];
+  };
+}
+```
+
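For context, the `mkCacheSettings` function used in the snippet above is exported from the repository's `default.nix` (see the `mkCacheSettings = import ./machines/storage01/tvix-cache/cache-settings.nix;` line later in this diff). It is expected to return plain `nix.settings` values; a hypothetical shape, with placeholder endpoint and key, could be:

```nix
# Hypothetical illustration only: the real substituter URLs and public keys are
# defined in machines/storage01/tvix-cache/cache-settings.nix, not here.
{ caches }:
{
  substituters = map (c: "https://cache.example.dgnum.eu/${c}") caches;
  trusted-public-keys = [ "infra:PLACEHOLDER-PUBLIC-KEY=" ];
}
```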
 # Adding a new machine
 
 The first step is to create a minimal viable NixOS host, using the means necessary.
@ -19,7 +34,7 @@ The second step is to find a name for this host, it must be unique from the othe
 
 ## Download the keys
 
-The public SSH keys of `host02` have to be saved to `keys/machines/host02.keys`, preferably only the `ssh-ed25519` one.
+The public SSH keys of `host02` have to be saved to `keys`, preferably only the `ssh-ed25519` one.
 
 It can be retrieved with :
@ -76,11 +91,9 @@ The general metadata is declared in `meta/nodes.nix`, the main values to declare
 
 Create the directory `secrets` in the configuration folder, and add a `secrets.nix` file containing :
 
 ```nix
-let
-  lib = import ../../../lib { };
-in
-lib.setDefault { publicKeys = lib.getNodeKeys "host02"; } [ ]
+(import ../../../keys).mkSecrets [ "host02" ] [
+  # List of secrets for host02
+]
 ```
 
 This will be used for future secret management.

View file

@ -76,6 +76,8 @@ in
   dns = import ./meta/dns.nix;
 
+  mkCacheSettings = import ./machines/storage01/tvix-cache/cache-settings.nix;
+
   shells = {
     default = pkgs.mkShell {
       name = "dgnum-infra";
@ -85,7 +87,6 @@ in
         version = "1.8.0-unstable";
         src = builtins.storePath sources.nixos-generators;
       }))
-      pkgs.attic-client
       pkgs.npins
       (pkgs.callPackage ./lib/colmena { inherit (nix-pkgs) colmena; })

View file

@ -1,24 +1,25 @@
 let
-  sources = import ./npins;
-  lib = import (sources.nix-lib + "/src/trivial.nix");
-  patch = import sources.nix-patches { patchFile = ./patches; };
+  sources' = import ./npins;
+
+  # Patch sources directly
+  sources = builtins.mapAttrs (patch.base { pkgs = import sources'.nixos-unstable { }; })
+    .applyPatches' sources';
+
+  nix-lib = import ./lib/nix-lib;
+
+  patch = import ./lib/nix-patches { patchFile = ./patches; };
 
   nodes' = import ./meta/nodes.nix;
   nodes = builtins.attrNames nodes';
 
   mkNode = node: {
     # Import the base configuration for each node
-    imports = builtins.map (lib.mkRel (./machines/${node})) [
-      "_configuration.nix"
-      "_hardware-configuration.nix"
-    ];
+    imports = [ ./machines/${node}/_configuration.nix ];
   };
 
   nixpkgs' = import ./meta/nixpkgs.nix;
 
   # All supported nixpkgs versions, instanciated
-  nixpkgs = lib.mapSingleFuse mkNixpkgs nixpkgs'.supported;
+  nixpkgs = nix-lib.mapSingleFuse mkNixpkgs nixpkgs'.supported;
 
   # Get the configured nixos version for the node,
   # defaulting to the one defined in meta/nixpkgs
@ -27,12 +28,9 @@ let
   # Builds a patched version of nixpkgs, only as the source
   mkNixpkgs' =
     v:
-    let
-      version = "nixos-${v}";
-    in
-    patch.mkNixpkgsSrc {
-      src = sources.${version};
-      inherit version;
+    patch.mkNixpkgsSrc rec {
+      src = sources'.${name};
+      name = "nixos-${v}";
     };
 
   # Instanciates the required nixpkgs version
@ -42,10 +40,8 @@ let
   # Function to create arguments based on the node
   #
   mkArgs = node: rec {
-    lib = import sources.nix-lib {
-      inherit (nixpkgs.${version node}) lib;
-      keysRoot = ./keys;
+    lib = nixpkgs.${version node}.lib // {
+      extra = nix-lib;
     };
 
     meta = (import ./meta) lib;
@ -56,13 +52,15 @@ in
 {
   meta = {
-    nodeNixpkgs = lib.mapSingleFuse (n: nixpkgs.${version n}) nodes;
+    nodeNixpkgs = nix-lib.mapSingleFuse (n: nixpkgs.${version n}) nodes;
 
     specialArgs = {
       inherit nixpkgs sources;
+      dgn-keys = import ./keys;
     };
 
-    nodeSpecialArgs = lib.mapSingleFuse mkArgs nodes;
+    nodeSpecialArgs = nix-lib.mapSingleFuse mkArgs nodes;
   };
 
   defaults =
@ -112,4 +110,4 @@ in
     };
   };
 }
-// (lib.mapSingleFuse mkNode nodes)
+// (nix-lib.mapSingleFuse mkNode nodes)

View file

@ -1,7 +1,7 @@
 { lib, pkgs, ... }:
 
 let
-  dgn-lib = import ../lib { };
+  dgn-keys = import ../keys;
 
   dgn-members = (import ../meta lib).organization.groups.root;
 in
@ -34,7 +34,5 @@ in
     openssh.enable = true;
   };
 
-  users.users.root.openssh.authorizedKeys.keyFiles = builtins.map (
-    m: dgn-lib.mkRel ../keys "${m}.keys"
-  ) dgn-members;
+  users.users.root.openssh.authorizedKeys.keys = dgn-keys.getKeys dgn-members;
 }

View file

@ -1 +0,0 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAA16foz+XzwKwyIR4wFgNIAE3Y7AfXyEsUZFVVz8Rie catvayor@katvayor

keys/default.nix (new file, 80 lines)
View file

@ -0,0 +1,80 @@
let
_sources = import ../npins;
meta = import ../meta (import _sources.nixpkgs { }).lib;
getAttr = flip builtins.getAttr;
inherit (import ../lib/nix-lib) flip setDefault unique;
in
rec {
# WARNING: When updating this list, make sure that the nodes and members are alphabetically sorted
# If not, you will face an angry maintainer
_keys = {
# SSH keys of the nodes
bridge01 = [ "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP5bS3iBXz8wycBnTvI5Qi79WLu0h4IVv/EOdKYbP5y7" ];
compute01 = [ "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE/YluSVS+4h3oV8CIUj0OmquyJXju8aEQy0Jz210vTu" ];
geo01 = [ "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEl6Pubbau+usQkemymoSKrTBbrX8JU5m5qpZbhNx8p4" ];
geo02 = [ "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFNXaCS0/Nsu5npqQk1TP6wMHCVIOaj4pblp2tIg6Ket" ];
krz01 = [ "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP4o65gWOgNrxbSd3kiQIGZUM+YD6kuZOQtblvzUGsfB" ];
rescue01 = [ "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEJa02Annu8o7ggPjTH/9ttotdNGyghlWfU9E8pnuLUf" ];
storage01 = [ "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA0s+rPcEcfWCqZ4B2oJiWT/60awOI8ijL1rtDM2glXZ" ];
vault01 = [ "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAJA6VA7LENvTRlKdcrqt8DxDOPvX3bg3Gjy9mNkdFEW" ];
web01 = [ "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPR+lewuJ/zhCyizJGJOH1UaAB699ItNKEaeuoK57LY5" ];
web02 = [ "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID+QDE+GgZs6zONHvzRW15BzGJNW69k2BFZgB/Zh/tLX" ];
# SSH keys of the DGNum members
catvayor = [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAA16foz+XzwKwyIR4wFgNIAE3Y7AfXyEsUZFVVz8Rie catvayor@katvayor"
];
ecoppens = [ "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIGmU7yEOCGuGNt4PlQbzd0Cms1RePpo8yEA7Ij/+TdA" ];
gdd = [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICE7TN5NQKGojNGIeTFiHjLHTDQGT8i05JFqX/zLW2zc"
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIFbkPWWZzOBaRdx4+7xQUgxDwuncSl2fxAeVuYfVUPZ"
];
jemagius = [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOoxmou5OU74GgpIUkhVt6GiB+O9Jy4ge0TwK5MDFJ2F"
"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCxQX0JLRah3GfIOkua4ZhEJhp5Ykv55RO0SPrSUwCBs5arnALg8gq12YLr09t4bzW/NA9/jn7flhh4S54l4RwBUhmV4JSQhGu71KGhfOj5ZBkDoSyYqzbu206DfZP5eQonSmjfP6XghcWOr/jlBzw9YAAQkFxsQgXEkr4kdn0ZXfZGz6b0t3YUjYIuDNbptFsGz2V9iQVy1vnxrjnLSfc25j4et8z729Vpy4M7oCaE6a6hgon4V1jhVbg43NAE5gu2eYFAPIzO3E7ZI8WjyLu1wtOBClk1f+HMen3Tr+SX2PXmpPGb+I2fAkbzu/C4X/M3+2bL1dYjxuvQhvvpAjxFwmdoXW4gWJ3J/FRiFrKsiAY0rYC+yi8SfacJWCv4EEcV/yQ4gYwpmU9xImLaro6w5cOHGCqrzYqjZc4Wi6AWFGeBSNzNs9PXLgMRWeUyiIDOFnSep2ebZeVjTB16m+o/YDEhE10uX9kCCx3Dy/41iJ1ps7V4JWGFsr0Fqaz8mu8="
];
luj = [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDMBW7rTtfZL9wtrpCVgariKdpN60/VeAzXkh9w3MwbO julien@enigma"
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGa+7n7kNzb86pTqaMn554KiPrkHRGeTJ0asY1NjSbpr julien@tower"
];
mdebray = [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEpwF+XD3HgX64kqD42pcEZRNYAWoO4YNiOm5KO4tH6o maurice@polaris"
];
raito = [
"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDcEkYM1r8QVNM/G5CxJInEdoBCWjEHHDdHlzDYNSUIdHHsn04QY+XI67AdMCm8w30GZnLUIj5RiJEWXREUApby0GrfxGGcy8otforygfgtmuUKAUEHdU2MMwrQI7RtTZ8oQ0USRGuqvmegxz3l5caVU7qGvBllJ4NUHXrkZSja2/51vq80RF4MKkDGiz7xUTixI2UcBwQBCA/kQedKV9G28EH+1XfvePqmMivZjl+7VyHsgUVj9eRGA1XWFw59UPZG8a7VkxO/Eb3K9NF297HUAcFMcbY6cPFi9AaBgu3VC4eetDnoN/+xT1owiHi7BReQhGAy/6cdf7C/my5ehZwD"
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE0xMwWedkKosax9+7D2OlnMxFL/eV4CvFZLsbLptpXr"
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKiXXYkhRh+s7ixZ8rvG8ntIqd6FELQ9hh7HoaHQJRPU"
];
thubrecht = [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL+EZXYziiaynJX99EW8KesnmRTZMof3BoIs3mdEl8L3"
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHL4M4HKjs4cjRAYRk9pmmI8U0R4+T/jQh6Fxp/i1Eoy"
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPM1jpXR7BWQa7Sed7ii3SbvIPRRlKb3G91qC0vOwfJn"
];
};
getKeys = ls: builtins.concatLists (builtins.map (getAttr _keys) ls);
mkSecrets =
nodes: setDefault { publicKeys = unique (rootKeys ++ (builtins.concatMap getNodeKeys' nodes)); };
getNodeKeys' =
node:
let
names = builtins.foldl' (names: group: names ++ meta.organization.groups.${group}) (
meta.nodes.${node}.admins ++ [ node ]
) meta.nodes.${node}.adminGroups;
in
unique (getKeys names);
getNodeKeys = node: rootKeys ++ getNodeKeys' node;
# List of keys for the root group
rootKeys = getKeys meta.organization.groups.root;
# List of 'machine' keys
machineKeys = rootKeys ++ (getKeys (builtins.attrNames meta.nodes));
}
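A short usage sketch of the helpers above; `host02` and the secret name are placeholders, not entries that exist in this tree:

```nix
# Illustration only: "host02" and "example-token_file" are placeholder names.
let
  dgn-keys = import ./keys;
in
{
  # getKeys concatenates the key lists of the given members or nodes:
  rootLike = dgn-keys.getKeys [ "catvayor" "raito" ];

  # mkSecrets attaches the same publicKeys to every listed secret, so a
  # machine's secrets.nix becomes:
  #   (import ../../../keys).mkSecrets [ "host02" ] [ "example-token_file" ]
  # which evaluates to
  #   { "example-token_file".publicKeys = rootKeys ++ keys of host02 and its admins; }
}
```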

View file

@ -1 +0,0 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIGmU7yEOCGuGNt4PlQbzd0Cms1RePpo8yEA7Ij/+TdA

View file

@ -1,2 +0,0 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICE7TN5NQKGojNGIeTFiHjLHTDQGT8i05JFqX/zLW2zc
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIFbkPWWZzOBaRdx4+7xQUgxDwuncSl2fxAeVuYfVUPZ

View file

@ -1,2 +0,0 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOoxmou5OU74GgpIUkhVt6GiB+O9Jy4ge0TwK5MDFJ2F
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCxQX0JLRah3GfIOkua4ZhEJhp5Ykv55RO0SPrSUwCBs5arnALg8gq12YLr09t4bzW/NA9/jn7flhh4S54l4RwBUhmV4JSQhGu71KGhfOj5ZBkDoSyYqzbu206DfZP5eQonSmjfP6XghcWOr/jlBzw9YAAQkFxsQgXEkr4kdn0ZXfZGz6b0t3YUjYIuDNbptFsGz2V9iQVy1vnxrjnLSfc25j4et8z729Vpy4M7oCaE6a6hgon4V1jhVbg43NAE5gu2eYFAPIzO3E7ZI8WjyLu1wtOBClk1f+HMen3Tr+SX2PXmpPGb+I2fAkbzu/C4X/M3+2bL1dYjxuvQhvvpAjxFwmdoXW4gWJ3J/FRiFrKsiAY0rYC+yi8SfacJWCv4EEcV/yQ4gYwpmU9xImLaro6w5cOHGCqrzYqjZc4Wi6AWFGeBSNzNs9PXLgMRWeUyiIDOFnSep2ebZeVjTB16m+o/YDEhE10uX9kCCx3Dy/41iJ1ps7V4JWGFsr0Fqaz8mu8=

View file

@ -1,2 +0,0 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDMBW7rTtfZL9wtrpCVgariKdpN60/VeAzXkh9w3MwbO julien@enigma
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGa+7n7kNzb86pTqaMn554KiPrkHRGeTJ0asY1NjSbpr julien@tower

View file

@ -1 +0,0 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP5bS3iBXz8wycBnTvI5Qi79WLu0h4IVv/EOdKYbP5y7

View file

@ -1 +0,0 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE/YluSVS+4h3oV8CIUj0OmquyJXju8aEQy0Jz210vTu

View file

@ -1 +0,0 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEl6Pubbau+usQkemymoSKrTBbrX8JU5m5qpZbhNx8p4

View file

@ -1 +0,0 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFNXaCS0/Nsu5npqQk1TP6wMHCVIOaj4pblp2tIg6Ket

View file

@ -1 +0,0 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEJa02Annu8o7ggPjTH/9ttotdNGyghlWfU9E8pnuLUf

View file

@ -1 +0,0 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA0s+rPcEcfWCqZ4B2oJiWT/60awOI8ijL1rtDM2glXZ

View file

@ -1 +0,0 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAJA6VA7LENvTRlKdcrqt8DxDOPvX3bg3Gjy9mNkdFEW

View file

@ -1 +0,0 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPR+lewuJ/zhCyizJGJOH1UaAB699ItNKEaeuoK57LY5

View file

@ -1 +0,0 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID+QDE+GgZs6zONHvzRW15BzGJNW69k2BFZgB/Zh/tLX

View file

@ -1 +0,0 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEpwF+XD3HgX64kqD42pcEZRNYAWoO4YNiOm5KO4tH6o maurice@polaris

View file

@ -1,3 +0,0 @@
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDcEkYM1r8QVNM/G5CxJInEdoBCWjEHHDdHlzDYNSUIdHHsn04QY+XI67AdMCm8w30GZnLUIj5RiJEWXREUApby0GrfxGGcy8otforygfgtmuUKAUEHdU2MMwrQI7RtTZ8oQ0USRGuqvmegxz3l5caVU7qGvBllJ4NUHXrkZSja2/51vq80RF4MKkDGiz7xUTixI2UcBwQBCA/kQedKV9G28EH+1XfvePqmMivZjl+7VyHsgUVj9eRGA1XWFw59UPZG8a7VkxO/Eb3K9NF297HUAcFMcbY6cPFi9AaBgu3VC4eetDnoN/+xT1owiHi7BReQhGAy/6cdf7C/my5ehZwD
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE0xMwWedkKosax9+7D2OlnMxFL/eV4CvFZLsbLptpXr
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKiXXYkhRh+s7ixZ8rvG8ntIqd6FELQ9hh7HoaHQJRPU

View file

@ -1,3 +0,0 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL+EZXYziiaynJX99EW8KesnmRTZMof3BoIs3mdEl8L3
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHL4M4HKjs4cjRAYRk9pmmI8U0R4+T/jQh6Fxp/i1Eoy
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPM1jpXR7BWQa7Sed7ii3SbvIPRRlKb3G91qC0vOwfJn

View file

@ -1,33 +0,0 @@
_:
let
sources = import ../npins;
lib = import sources.nix-lib {
inherit ((import sources.nixpkgs { })) lib;
keysRoot = ../keys;
};
meta = import ../meta lib;
inherit (lib.extra) getAllKeys;
in
lib.extra
// rec {
# Get publickeys associated to a node
getNodeKeys =
node:
let
names = builtins.foldl' (names: group: names ++ meta.organization.groups.${group}) (
meta.nodes.${node}.admins ++ [ "/machines/${node}" ]
) meta.nodes.${node}.adminGroups;
in
rootKeys ++ (getAllKeys names);
rootKeys = getAllKeys meta.organization.groups.root;
machineKeys =
rootKeys ++ (getAllKeys (builtins.map (n: "machines/${n}") (builtins.attrNames meta.nodes)));
}

lib/nix-lib/default.nix (new file, 197 lines)
View file

@ -0,0 +1,197 @@
# Copyright Tom Hubrecht, (2023)
#
# Tom Hubrecht <tom@hubrecht.ovh>
#
# This software is governed by the CeCILL license under French law and
# abiding by the rules of distribution of free software. You can use,
# modify and/ or redistribute the software under the terms of the CeCILL
# license as circulated by CEA, CNRS and INRIA at the following URL
# "http://www.cecill.info".
#
# As a counterpart to the access to the source code and rights to copy,
# modify and redistribute granted by the license, users are provided only
# with a limited warranty and the software's author, the holder of the
# economic rights, and the successive licensors have only limited
# liability.
#
# In this respect, the user's attention is drawn to the risks associated
# with loading, using, modifying and/or developing or reproducing the
# software by the user in light of its specific status of free software,
# that may mean that it is complicated to manipulate, and that also
# therefore means that it is reserved for developers and experienced
# professionals having in-depth computer knowledge. Users are therefore
# encouraged to load and test the software's suitability as regards their
# requirements in conditions enabling the security of their systems and/or
# data to be ensured and, more generally, to use and operate it in the
# same conditions as regards security.
#
# The fact that you are presently reading this means that you have had
# knowledge of the CeCILL license and that you accept its terms.
let
# Reimplement optional functions
_optional =
default: b: value:
if b then value else default;
in
rec {
inherit (import ./nixpkgs.nix)
flip
hasPrefix
recursiveUpdate
splitString
unique
;
/*
Fuses a list of attribute sets into a single attribute set.
Type: [attrs] -> attrs
Example:
x = [ { a = 1; } { b = 2; } ]
fuseAttrs x
=> { a = 1; b = 2; }
*/
fuseAttrs = builtins.foldl' (attrs: x: attrs // x) { };
fuseValueAttrs = attrs: fuseAttrs (builtins.attrValues attrs);
/*
Applies a function to `attrsList` before fusing the resulting list
of attribute sets.
Type: ('a -> attrs) -> ['a] -> attrs
Example:
x = [ "to" "ta" "ti" ]
f = s: { ${s} = s + s; }
mapFuse f x
=> { to = "toto"; ta = "tata"; ti = "titi"; }
*/
mapFuse =
# 'a -> attrs
f:
# ['a]
attrsList:
fuseAttrs (builtins.map f attrsList);
/*
Equivalent of lib.singleton but for an attribute set.
Type: str -> 'a -> attrs
Example:
singleAttr "a" 1
=> { a = 1; }
*/
singleAttr = name: value: { ${name} = value; };
# Enables a list of modules.
enableAttrs' =
enable:
mapFuse (m: {
${m}.${enable} = true;
});
enableModules = enableAttrs' "enable";
/*
Create an attribute set from a list of values, mapping those
values through the function `f`.
Example:
mapSingleFuse (x: "val-${x}") [ "a" "b" ]
=> { a = "val-a"; b = "val-b" }
*/
mapSingleFuse = f: mapFuse (x: singleAttr x (f x));
/*
Creates a relative path as a string
Type: path -> str -> path
Example:
mkRel /home/test/ "file.txt"
=> "/home/test/file.txt"
*/
mkRel = path: file: path + "/${file}";
setDefault =
default:
mapFuse (name: {
${name} = default;
});
mkBaseSecrets =
root:
mapFuse (secret: {
${secret}.file = mkRel root secret;
});
getSecrets = dir: builtins.attrNames (import (mkRel dir "secrets.nix"));
subAttr = attrs: name: attrs.${name};
subAttrs = attrs: builtins.map (subAttr attrs);
optionalList = _optional [ ];
optionalAttrs = _optional { };
optionalString = _optional "";
/*
Same as fuseAttrs but using `lib.recursiveUpdate` to merge attribute
sets together.
Type: [attrs] -> attrs
*/
recursiveFuse = builtins.foldl' recursiveUpdate { };
mkImport =
root: file:
let
path = mkRel root file;
in
path + (optionalString (!(builtins.pathExists path)) ".nix");
mkImports = root: builtins.map (mkImport root);
/*
Creates a configuration by merging enabled modules,
services and extraConfig.
Example:
mkConfig {
enabledModules = [ "ht-defaults" ];
enabledServices = [ "toto" ];
extraConfig = { services.nginx.enable = true; };
root = ./.;
}
=>
{
imports = [ ./toto ];
ht-defaults.enable = true;
services.nginx.enable = true;
}
*/
mkConfig =
{
# List of modules to enable with `enableModules`
enabledModules,
# List of services to import
enabledServices,
# Extra configuration, defaults to `{ }`
extraConfig ? { },
# Path relative to which the enabled services will be imported
root,
}:
recursiveFuse [
(enableModules enabledModules)
{ imports = mkImports root ([ "_hardware-configuration" ] ++ enabledServices); }
extraConfig
];
}
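Most helpers above carry doc-comments with examples; `setDefault`, `getSecrets` and `mkBaseSecrets` do not, so here is a small hedged sketch of how they compose. The `./secrets` directory and the secret name are placeholders:

```nix
# Illustration only: ./secrets and the secret names are placeholders.
let
  nix-lib = import ./lib/nix-lib;
in
{
  # setDefault gives every listed name the same value:
  #   nix-lib.setDefault { publicKeys = [ "ssh-ed25519 AAAA..." ]; } [ "a" "b" ]
  #   => { a.publicKeys = [ ... ]; b.publicKeys = [ ... ]; }

  # getSecrets reads the names declared in ./secrets/secrets.nix, and
  # mkBaseSecrets maps each one to a file inside ./secrets, which is the shape
  # expected by an agenix-style `age.secrets` option:
  age.secrets = nix-lib.mkBaseSecrets ./secrets (nix-lib.getSecrets ./secrets);
  # => { "example-token_file".file = ./secrets + "/example-token_file"; ... }
}
```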

lib/nix-lib/nixpkgs.nix (new file, 416 lines)
View file

@ -0,0 +1,416 @@
###
# Collection of nixpkgs library functions, those are necessary for defining our own lib
#
# They have been simplified and builtins are used in some places, instead of lib shims.
rec {
/**
Does the same as the update operator '//' except that attributes are
merged until the given predicate is verified. The predicate should
accept 3 arguments which are the path to reach the attribute, a part of
the first attribute set and a part of the second attribute set. When
the predicate is satisfied, the value of the first attribute set is
replaced by the value of the second attribute set.
# Inputs
`pred`
: Predicate, taking the path to the current attribute as a list of strings for attribute names, and the two values at that path from the original arguments.
`lhs`
: Left attribute set of the merge.
`rhs`
: Right attribute set of the merge.
# Type
```
recursiveUpdateUntil :: ( [ String ] -> AttrSet -> AttrSet -> Bool ) -> AttrSet -> AttrSet -> AttrSet
```
# Examples
:::{.example}
## `lib.attrsets.recursiveUpdateUntil` usage example
```nix
recursiveUpdateUntil (path: l: r: path == ["foo"]) {
# first attribute set
foo.bar = 1;
foo.baz = 2;
bar = 3;
} {
#second attribute set
foo.bar = 1;
foo.quz = 2;
baz = 4;
}
=> {
foo.bar = 1; # 'foo.*' from the second set
foo.quz = 2; #
bar = 3; # 'bar' from the first set
baz = 4; # 'baz' from the second set
}
```
:::
*/
recursiveUpdateUntil =
pred: lhs: rhs:
let
f =
attrPath:
builtins.zipAttrsWith (
n: values:
let
here = attrPath ++ [ n ];
in
if builtins.length values == 1 || pred here (builtins.elemAt values 1) (builtins.head values) then
builtins.head values
else
f here values
);
in
f [ ] [
rhs
lhs
];
/**
A recursive variant of the update operator //. The recursion
stops when one of the attribute values is not an attribute set,
in which case the right hand side value takes precedence over the
left hand side value.
# Inputs
`lhs`
: Left attribute set of the merge.
`rhs`
: Right attribute set of the merge.
# Type
```
recursiveUpdate :: AttrSet -> AttrSet -> AttrSet
```
# Examples
:::{.example}
## `lib.attrsets.recursiveUpdate` usage example
```nix
recursiveUpdate {
boot.loader.grub.enable = true;
boot.loader.grub.device = "/dev/hda";
} {
boot.loader.grub.device = "";
}
returns: {
boot.loader.grub.enable = true;
boot.loader.grub.device = "";
}
```
:::
*/
recursiveUpdate =
lhs: rhs:
recursiveUpdateUntil (
_: lhs: rhs:
!(builtins.isAttrs lhs && builtins.isAttrs rhs)
) lhs rhs;
/**
Determine whether a string has given prefix.
# Inputs
`pref`
: Prefix to check for
`str`
: Input string
# Type
```
hasPrefix :: string -> string -> bool
```
# Examples
:::{.example}
## `lib.strings.hasPrefix` usage example
```nix
hasPrefix "foo" "foobar"
=> true
hasPrefix "foo" "barfoo"
=> false
```
:::
*/
hasPrefix = pref: str: (builtins.substring 0 (builtins.stringLength pref) str == pref);
/**
Escape occurrence of the elements of `list` in `string` by
prefixing it with a backslash.
# Inputs
`list`
: 1\. Function argument
`string`
: 2\. Function argument
# Type
```
escape :: [string] -> string -> string
```
# Examples
:::{.example}
## `lib.strings.escape` usage example
```nix
escape ["(" ")"] "(foo)"
=> "\\(foo\\)"
```
:::
*/
escape = list: builtins.replaceStrings list (builtins.map (c: "\\${c}") list);
/**
Convert a string `s` to a list of characters (i.e. singleton strings).
This allows you to, e.g., map a function over each character. However,
note that this will likely be horribly inefficient; Nix is not a
general purpose programming language. Complex string manipulations
should, if appropriate, be done in a derivation.
Also note that Nix treats strings as a list of bytes and thus doesn't
handle unicode.
# Inputs
`s`
: 1\. Function argument
# Type
```
stringToCharacters :: string -> [string]
```
# Examples
:::{.example}
## `lib.strings.stringToCharacters` usage example
```nix
stringToCharacters ""
=> [ ]
stringToCharacters "abc"
=> [ "a" "b" "c" ]
stringToCharacters "🦄"
=> [ "<EFBFBD>" "<EFBFBD>" "<EFBFBD>" "<EFBFBD>" ]
```
:::
*/
stringToCharacters = s: builtins.genList (p: builtins.substring p 1 s) (builtins.stringLength s);
/**
Turn a string `s` into an exact regular expression
# Inputs
`s`
: 1\. Function argument
# Type
```
escapeRegex :: string -> string
```
# Examples
:::{.example}
## `lib.strings.escapeRegex` usage example
```nix
escapeRegex "[^a-z]*"
=> "\\[\\^a-z]\\*"
```
:::
*/
escapeRegex = escape (stringToCharacters "\\[{()^$?*+|.");
/**
Appends string context from string like object `src` to `target`.
:::{.warning}
This is an implementation
detail of Nix and should be used carefully.
:::
Strings in Nix carry an invisible `context` which is a list of strings
representing store paths. If the string is later used in a derivation
attribute, the derivation will properly populate the inputDrvs and
inputSrcs.
# Inputs
`src`
: The string to take the context from. If the argument is not a string,
it will be implicitly converted to a string.
`target`
: The string to append the context to. If the argument is not a string,
it will be implicitly converted to a string.
# Type
```
addContextFrom :: string -> string -> string
```
# Examples
:::{.example}
## `lib.strings.addContextFrom` usage example
```nix
pkgs = import <nixpkgs> { };
addContextFrom pkgs.coreutils "bar"
=> "bar"
```
The context can be displayed using the `toString` function:
```nix
nix-repl> builtins.getContext (lib.strings.addContextFrom pkgs.coreutils "bar")
{
"/nix/store/m1s1d2dk2dqqlw3j90jl3cjy2cykbdxz-coreutils-9.5.drv" = { ... };
}
```
:::
*/
addContextFrom = src: target: builtins.substring 0 0 src + target;
/**
Cut a string with a separator and produces a list of strings which
were separated by this separator.
# Inputs
`sep`
: 1\. Function argument
`s`
: 2\. Function argument
# Type
```
splitString :: string -> string -> [string]
```
# Examples
:::{.example}
## `lib.strings.splitString` usage example
```nix
splitString "." "foo.bar.baz"
=> [ "foo" "bar" "baz" ]
splitString "/" "/usr/local/bin"
=> [ "" "usr" "local" "bin" ]
```
:::
*/
splitString =
sep: s:
let
splits = builtins.filter builtins.isString (
builtins.split (escapeRegex (builtins.toString sep)) (builtins.toString s)
);
in
builtins.map (addContextFrom s) splits;
/**
Remove duplicate elements from the `list`. O(n^2) complexity.
# Inputs
`list`
: Input list
# Type
```
unique :: [a] -> [a]
```
# Examples
:::{.example}
## `lib.lists.unique` usage example
```nix
unique [ 3 2 3 4 ]
=> [ 3 2 4 ]
```
:::
*/
unique = builtins.foldl' (acc: e: if builtins.elem e acc then acc else acc ++ [ e ]) [ ];
/**
Flip the order of the arguments of a binary function.
# Inputs
`f`
: 1\. Function argument
`a`
: 2\. Function argument
`b`
: 3\. Function argument
# Type
```
flip :: (a -> b -> c) -> (b -> a -> c)
```
# Examples
:::{.example}
## `lib.trivial.flip` usage example
```nix
flip concat [1] [2]
=> [ 2 1 ]
```
:::
*/
flip =
f: a: b:
f b a;
}

lib/nix-patches/default.nix (new file, 110 lines)
View file

@ -0,0 +1,110 @@
# Copyright Tom Hubrecht, (2023-2024)
#
# Tom Hubrecht <tom@hubrecht.ovh>
#
# This software is governed by the CeCILL license under French law and
# abiding by the rules of distribution of free software. You can use,
# modify and/ or redistribute the software under the terms of the CeCILL
# license as circulated by CEA, CNRS and INRIA at the following URL
# "http://www.cecill.info".
#
# As a counterpart to the access to the source code and rights to copy,
# modify and redistribute granted by the license, users are provided only
# with a limited warranty and the software's author, the holder of the
# economic rights, and the successive licensors have only limited
# liability.
#
# In this respect, the user's attention is drawn to the risks associated
# with loading, using, modifying and/or developing or reproducing the
# software by the user in light of its specific status of free software,
# that may mean that it is complicated to manipulate, and that also
# therefore means that it is reserved for developers and experienced
# professionals having in-depth computer knowledge. Users are therefore
# encouraged to load and test the software's suitability as regards their
# requirements in conditions enabling the security of their systems and/or
# data to be ensured and, more generally, to use and operate it in the
# same conditions as regards security.
#
# The fact that you are presently reading this means that you have had
# knowledge of the CeCILL license and that you accept its terms.
{
patchFile,
excludeGitHubManual ? true,
fetchers ? { },
}:
rec {
base =
{ pkgs }:
rec {
mkUrlPatch =
attrs:
pkgs.fetchpatch (
{
hash = pkgs.lib.fakeHash;
}
// attrs
// (pkgs.lib.optionalAttrs (excludeGitHubManual && !(builtins.hasAttr "includes" attrs)) {
excludes = (attrs.excludes or [ ]) ++ [ "nixos/doc/manual/*" ];
})
);
mkGitHubPatch =
{ id, ... }@attrs:
mkUrlPatch (
(builtins.removeAttrs attrs [ "id" ])
// {
url = "https://github.com/NixOS/nixpkgs/pull/${builtins.toString id}.diff";
}
);
mkCommitPatch =
{ sha, ... }@attrs:
mkUrlPatch (
(builtins.removeAttrs attrs [ "sha" ])
// {
url = "https://github.com/NixOS/nixpkgs/commit/${builtins.toString sha}.diff";
}
);
patchFunctions = {
commit = mkCommitPatch;
github = mkGitHubPatch;
remote = pkgs.fetchpatch;
static = attrs: attrs.path;
url = mkUrlPatch;
} // fetchers;
mkPatch =
{
_type ? "github",
...
}@attrs:
if builtins.hasAttr _type patchFunctions then
patchFunctions.${_type} (builtins.removeAttrs attrs [ "_type" ])
else
throw "Unknown patch type: ${builtins.toString _type}.";
mkPatches = v: builtins.map mkPatch ((import patchFile).${v} or [ ]);
applyPatches =
{
src,
name,
patches ? mkPatches name,
}:
if patches == [ ] then
src
else
pkgs.applyPatches {
inherit patches src;
name = "${name}-patched";
};
applyPatches' = name: src: applyPatches { inherit name src; };
};
mkNixpkgsSrc = { src, name }: (base { pkgs = import src { }; }).applyPatches { inherit src name; };
}
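`mkPatches` looks up the attribute named after the nixpkgs source (e.g. `nixos-24.05`) in the file passed as `patchFile`, here `./patches`. A hedged sketch of what such a patch file could contain; the pull-request id, commit sha and hashes below are placeholders:

```nix
# Hypothetical patches/default.nix: ids, commit shas and hashes are placeholders.
{
  "nixos-24.05" = [
    # _type defaults to "github": fetch the diff of a nixpkgs pull request.
    {
      id = 123456;
      hash = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";
    }
    # A single upstream commit.
    {
      _type = "commit";
      sha = "0123456789abcdef0123456789abcdef01234567";
      hash = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";
    }
    # A patch file kept in the repository.
    {
      _type = "static";
      path = ./some-local-change.patch;
    }
  ];
}
```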

View file

@ -1,5 +1,3 @@
-let
-  lib = import ../../../lib { };
-in
-lib.setDefault { publicKeys = lib.getNodeKeys "bridge01"; } [ ]
+(import ../../../keys).mkSecrets [ "bridg01" ] [
+  # List of secrets for bridge01
+]

View file

@ -21,6 +21,7 @@ lib.extra.mkConfig {
     "librenms"
     "mastodon"
     "nextcloud"
+    "ollama-proxy"
     "outline"
     "plausible"
     "postgresql"

View file

@ -8,7 +8,7 @@
}: }:
let let
inherit (lib) mapAttrsToList; inherit (lib) toLower;
python = python =
let let
@ -33,25 +33,29 @@ let
}; };
}; };
pythonEnv = python.withPackages (ps: [ pythonEnv = python.withPackages (
ps.django ps:
ps.gunicorn [
ps.psycopg ps.django
ps.django-compressor ps.gunicorn
ps.django-import-export ps.psycopg
ps.django-compressor
ps.django-import-export
# Local packages # Local packages
ps.django-allauth ps.django-allauth
ps.django-allauth-cas ps.django-allauth-cas
ps.django-browser-reload ps.django-browser-reload
ps.django-bulma-forms ps.django-bulma-forms
ps.django-sass-processor ps.django-sass-processor
ps.django-sass-processor-dart-sass ps.django-sass-processor-dart-sass
ps.django-unfold ps.django-unfold
ps.loadcredential ps.loadcredential
ps.pykanidm ps.pykanidm
ps.python-cas ps.python-cas
]); ]
++ ps.django-allauth.optional-dependencies.saml
);
staticDrv = pkgs.stdenv.mkDerivation { staticDrv = pkgs.stdenv.mkDerivation {
name = "dgsi-static"; name = "dgsi-static";
@ -67,8 +71,10 @@ let
configurePhase = '' configurePhase = ''
export DGSI_STATIC_ROOT=$out/static export DGSI_STATIC_ROOT=$out/static
export CREDENTIALS_DIRECTORY=$(pwd)/../.credentials export CREDENTIALS_DIRECTORY=$(pwd)/../.credentials
export DGSI_KANIDM_CLIENT="dgsi_test"; export DGSI_KANIDM_CLIENT="dgsi_test"
export DGSI_KANIDM_AUTH_TOKEN="fake.token"; export DGSI_KANIDM_AUTH_TOKEN="fake.token"
export DGSI_X509_KEY=""
export DGSI_X509_CERT=""
''; '';
doBuild = false; doBuild = false;
@ -101,12 +107,14 @@ in
serviceConfig = { serviceConfig = {
DynamicUser = true; DynamicUser = true;
LoadCredential = mapAttrsToList (name: value: "${name}:${value}") { LoadCredential = map (name: "${name}:${config.age.secrets."dgsi-${toLower name}_file".path}") [
SECRET_KEY = config.age.secrets."dgsi-secret_key_file".path; "EMAIL_HOST_PASSWORD"
KANIDM_AUTH_TOKEN = config.age.secrets."dgsi-kanidm_auth_token_file".path; "KANIDM_AUTH_TOKEN"
KANIDM_SECRET = config.age.secrets."dgsi-kanidm_secret_file".path; "KANIDM_SECRET"
EMAIL_HOST_PASSWORD = config.age.secrets."dgsi-email_host_password_file".path; "SECRET_KEY"
}; "X509_CERT"
"X509_KEY"
];
RuntimeDirectory = "django-apps/dgsi"; RuntimeDirectory = "django-apps/dgsi";
StateDirectory = "django-apps/dgsi"; StateDirectory = "django-apps/dgsi";
UMask = "0027"; UMask = "0027";

View file

@ -1,9 +1,4 @@
-let
-  lib = import ../../../../lib { };
-  publicKeys = lib.getNodeKeys "compute01";
-in
-lib.setDefault { inherit publicKeys; } [
+(import ../../../../keys).mkSecrets [ "compute01" ] [
   "kanidm-password_admin"
   "kanidm-password_idm_admin"
 ]

View file

@ -0,0 +1,27 @@
{
pkgs,
nodes,
meta,
...
}:
{
services.nginx = {
enable = true;
recommendedProxySettings = true;
virtualHosts."ollama01.beta.dgnum.eu" = {
enableACME = true;
forceSSL = true;
locations."/" = {
proxyPass = "http://${meta.network.krz01.netbirdIp}:${toString nodes.krz01.config.services.ollama.port}";
basicAuthFile = pkgs.writeText "ollama-htpasswd" ''
raito:$y$j9T$UDEHpLtM52hRGK0I4qT6M0$N75AhENLqgtJnTGaPzq51imhjZvuPr.ow81Co1ZTcX2
'';
};
};
};
networking.firewall.allowedTCPPorts = [
80
443
];
}

Binary file not shown.

Binary file not shown.

View file

@ -1,15 +1,13 @@
-let
-  lib = import ../../../lib { };
-  publicKeys = lib.getNodeKeys "compute01";
-in
-lib.setDefault { inherit publicKeys; } [
+(import ../../../keys).mkSecrets [ "compute01" ] [
+  # List of secrets for compute01
   "arkheon-env_file"
   "bupstash-put_key"
   "dgsi-email_host_password_file"
   "dgsi-kanidm_auth_token_file"
   "dgsi-kanidm_secret_file"
   "dgsi-secret_key_file"
+  "dgsi-x509_cert_file"
+  "dgsi-x509_key_file"
   "ds-fr-secret_file"
   "grafana-oauth_client_secret_file"
   "grafana-smtp_password_file"

View file

@ -1,7 +1,16 @@
 { nixpkgs, ... }:
 
 let
-  dgn-id = "f756a0f47e704db815a7af6786f6eb0aec628d6b";
+  ###
+  # How to update:
+  # - clone https://git.dgnum.eu/DGNum/Stirling-PDF
+  # - switch to the branch dgn-v0.X.Y where X.Y is the version in production
+  # - fetch upstream changes up to the tagged release in nixos-unstable
+  # - rebase onto the upstream branch, so that the last commit is "feat: Add DGNum customization"
+  # - push to a new branch dgn-v0.A.B where A.B is the new version
+  # - finally, update the commit hash of the customization patch
+  dgn-id = "8f19cb1c9623f8da71f6512c1528d83acc35db57";
 in
 {

View file

@ -1,5 +1,3 @@
-let
-  lib = import ../../../lib { };
-  publicKeys = lib.getNodeKeys "geo01";
-in
-lib.setDefault { inherit publicKeys; } [ ]
+(import ../../../keys).mkSecrets [ "geo01" ] [
+  # List of secrets for geo01
+]

View file

@ -1,5 +1,3 @@
-let
-  lib = import ../../../lib { };
-  publicKeys = lib.getNodeKeys "geo02";
-in
-lib.setDefault { inherit publicKeys; } [ ]
+(import ../../../keys).mkSecrets [ "geo02" ] [
+  # List of secrets for geo02
+]

View file

@ -0,0 +1,179 @@
From 2abd226ff3093c5a9e18a618fba466853e7ebaf7 Mon Sep 17 00:00:00 2001
From: Raito Bezarius <masterancpp@gmail.com>
Date: Tue, 8 Oct 2024 18:27:41 +0200
Subject: [PATCH] K80 support
Signed-off-by: Raito Bezarius <masterancpp@gmail.com>
---
docs/development.md | 6 +++-
docs/gpu.md | 1 +
gpu/amd_linux.go | 6 +++-
gpu/gpu.go | 63 ++++++++++++++++++++++++++++++++++++-----
scripts/build_docker.sh | 2 +-
scripts/build_linux.sh | 2 +-
6 files changed, 69 insertions(+), 11 deletions(-)
diff --git a/docs/development.md b/docs/development.md
index 2f7b9ecf..9da35931 100644
--- a/docs/development.md
+++ b/docs/development.md
@@ -51,7 +51,11 @@ Typically the build scripts will auto-detect CUDA, however, if your Linux distro
or installation approach uses unusual paths, you can specify the location by
specifying an environment variable `CUDA_LIB_DIR` to the location of the shared
libraries, and `CUDACXX` to the location of the nvcc compiler. You can customize
-a set of target CUDA architectures by setting `CMAKE_CUDA_ARCHITECTURES` (e.g. "50;60;70")
+a set of target CUDA architectures by setting `CMAKE_CUDA_ARCHITECTURES` (e.g. "35;37;50;60;70")
+
+To support GPUs older than Compute Capability 5.0, you will need to use an older version of
+the Driver from [Unix Driver Archive](https://www.nvidia.com/en-us/drivers/unix/) (tested with 470) and [CUDA Toolkit Archive](https://developer.nvidia.com/cuda-toolkit-archive) (tested with cuda V11). When you build Ollama, you will need to set two environment variable to adjust the minimum compute capability Ollama supports via `export GOFLAGS="'-ldflags=-w -s \"-X=github.com/ollama/ollama/gpu.CudaComputeMajorMin=3\" \"-X=github.com/ollama/ollama/gpu.CudaComputeMinorMin=5\"'"` and the `CMAKE_CUDA_ARCHITECTURES`. To find the Compute Capability of your older GPU, refer to [GPU Compute Capability](https://developer.nvidia.com/cuda-gpus).
+
Then generate dependencies:
diff --git a/docs/gpu.md b/docs/gpu.md
index a6b559f0..66627611 100644
--- a/docs/gpu.md
+++ b/docs/gpu.md
@@ -28,6 +28,7 @@ Check your compute compatibility to see if your card is supported:
| 5.0 | GeForce GTX | `GTX 750 Ti` `GTX 750` `NVS 810` |
| | Quadro | `K2200` `K1200` `K620` `M1200` `M520` `M5000M` `M4000M` `M3000M` `M2000M` `M1000M` `K620M` `M600M` `M500M` |
+For building locally to support older GPUs, see [developer.md](./development.md#linux-cuda-nvidia)
### GPU Selection
diff --git a/gpu/amd_linux.go b/gpu/amd_linux.go
index 6b08ac2e..768fb97a 100644
--- a/gpu/amd_linux.go
+++ b/gpu/amd_linux.go
@@ -159,7 +159,11 @@ func AMDGetGPUInfo() []GpuInfo {
return []GpuInfo{}
}
- if int(major) < RocmComputeMin {
+ minVer, err := strconv.Atoi(RocmComputeMajorMin)
+ if err != nil {
+ slog.Error("invalid RocmComputeMajorMin setting", "value", RocmComputeMajorMin, "error", err)
+ }
+ if int(major) < minVer {
slog.Warn(fmt.Sprintf("amdgpu too old gfx%d%x%x", major, minor, patch), "gpu", gpuID)
continue
}
diff --git a/gpu/gpu.go b/gpu/gpu.go
index 781e23df..60d68c33 100644
--- a/gpu/gpu.go
+++ b/gpu/gpu.go
@@ -16,6 +16,7 @@ import (
"os"
"path/filepath"
"runtime"
+ "strconv"
"strings"
"sync"
"unsafe"
@@ -38,9 +39,11 @@ const (
var gpuMutex sync.Mutex
// With our current CUDA compile flags, older than 5.0 will not work properly
-var CudaComputeMin = [2]C.int{5, 0}
+// (string values used to allow ldflags overrides at build time)
+var CudaComputeMajorMin = "5"
+var CudaComputeMinorMin = "0"
-var RocmComputeMin = 9
+var RocmComputeMajorMin = "9"
// TODO find a better way to detect iGPU instead of minimum memory
const IGPUMemLimit = 1 * format.GibiByte // 512G is what they typically report, so anything less than 1G must be iGPU
@@ -175,11 +178,57 @@ func GetGPUInfo() GpuInfoList {
var memInfo C.mem_info_t
resp := []GpuInfo{}
- // NVIDIA first
- for i := 0; i < gpuHandles.deviceCount; i++ {
- // TODO once we support CPU compilation variants of GPU libraries refine this...
- if cpuVariant == "" && runtime.GOARCH == "amd64" {
- continue
+ // Load ALL libraries
+ cHandles = initCudaHandles()
+ minMajorVer, err := strconv.Atoi(CudaComputeMajorMin)
+ if err != nil {
+ slog.Error("invalid CudaComputeMajorMin setting", "value", CudaComputeMajorMin, "error", err)
+ }
+ minMinorVer, err := strconv.Atoi(CudaComputeMinorMin)
+ if err != nil {
+ slog.Error("invalid CudaComputeMinorMin setting", "value", CudaComputeMinorMin, "error", err)
+ }
+
+ // NVIDIA
+ for i := range cHandles.deviceCount {
+ if cHandles.cudart != nil || cHandles.nvcuda != nil {
+ gpuInfo := CudaGPUInfo{
+ GpuInfo: GpuInfo{
+ Library: "cuda",
+ },
+ index: i,
+ }
+ var driverMajor int
+ var driverMinor int
+ if cHandles.cudart != nil {
+ C.cudart_bootstrap(*cHandles.cudart, C.int(i), &memInfo)
+ } else {
+ C.nvcuda_bootstrap(*cHandles.nvcuda, C.int(i), &memInfo)
+ driverMajor = int(cHandles.nvcuda.driver_major)
+ driverMinor = int(cHandles.nvcuda.driver_minor)
+ }
+ if memInfo.err != nil {
+ slog.Info("error looking up nvidia GPU memory", "error", C.GoString(memInfo.err))
+ C.free(unsafe.Pointer(memInfo.err))
+ continue
+ }
+
+ if int(memInfo.major) < minMajorVer || (int(memInfo.major) == minMajorVer && int(memInfo.minor) < minMinorVer) {
+ slog.Info(fmt.Sprintf("[%d] CUDA GPU is too old. Compute Capability detected: %d.%d", i, memInfo.major, memInfo.minor))
+ continue
+ }
+ gpuInfo.TotalMemory = uint64(memInfo.total)
+ gpuInfo.FreeMemory = uint64(memInfo.free)
+ gpuInfo.ID = C.GoString(&memInfo.gpu_id[0])
+ gpuInfo.Compute = fmt.Sprintf("%d.%d", memInfo.major, memInfo.minor)
+ gpuInfo.MinimumMemory = cudaMinimumMemory
+ gpuInfo.DependencyPath = depPath
+ gpuInfo.Name = C.GoString(&memInfo.gpu_name[0])
+ gpuInfo.DriverMajor = driverMajor
+ gpuInfo.DriverMinor = driverMinor
+
+ // TODO potentially sort on our own algorithm instead of what the underlying GPU library does...
+ cudaGPUs = append(cudaGPUs, gpuInfo)
}
gpuInfo := GpuInfo{
Library: "cuda",
diff --git a/scripts/build_docker.sh b/scripts/build_docker.sh
index e91c56ed..c03bc25f 100755
--- a/scripts/build_docker.sh
+++ b/scripts/build_docker.sh
@@ -3,7 +3,7 @@
set -eu
export VERSION=${VERSION:-$(git describe --tags --first-parent --abbrev=7 --long --dirty --always | sed -e "s/^v//g")}
-export GOFLAGS="'-ldflags=-w -s \"-X=github.com/ollama/ollama/version.Version=$VERSION\" \"-X=github.com/ollama/ollama/server.mode=release\"'"
+export GOFLAGS=${GOFLAGS:-"'-ldflags=-w -s \"-X=github.com/ollama/ollama/version.Version=$VERSION\" \"-X=github.com/ollama/ollama/server.mode=release\"'"}
# We use 2 different image repositories to handle combining architecture images into multiarch manifest
# (The ROCm image is x86 only and is not a multiarch manifest)
diff --git a/scripts/build_linux.sh b/scripts/build_linux.sh
index 27c4ff1f..e7e6d0dd 100755
--- a/scripts/build_linux.sh
+++ b/scripts/build_linux.sh
@@ -3,7 +3,7 @@
set -eu
export VERSION=${VERSION:-$(git describe --tags --first-parent --abbrev=7 --long --dirty --always | sed -e "s/^v//g")}
-export GOFLAGS="'-ldflags=-w -s \"-X=github.com/ollama/ollama/version.Version=$VERSION\" \"-X=github.com/ollama/ollama/server.mode=release\"'"
+export GOFLAGS=${GOFLAGS:-"'-ldflags=-w -s \"-X=github.com/ollama/ollama/version.Version=$VERSION\" \"-X=github.com/ollama/ollama/server.mode=release\"'"}
BUILD_ARCH=${BUILD_ARCH:-"amd64 arm64"}
export AMDGPU_TARGETS=${AMDGPU_TARGETS:=""}
--
2.46.0

View file

@ -0,0 +1,79 @@
{
config,
lib,
pkgs,
meta,
name,
...
}:
lib.extra.mkConfig {
enabledModules = [
# INFO: This list needs to stay sorted alphabetically
];
enabledServices = [
# INFO: This list needs to stay sorted alphabetically
# Machine learning API machine
"microvm-ml01"
"microvm-router01"
"nvidia-tesla-k80"
"proxmox"
];
extraConfig = {
microvm = {
host.enable = true;
};
dgn-hardware = {
useZfs = true;
zfsPools = [
"dpool"
"ppool0"
];
};
services.netbird.enable = true;
# We are going to use CUDA here.
nixpkgs.config.cudaSupport = true;
hardware.graphics.enable = true;
environment.systemPackages = [
((pkgs.openai-whisper-cpp.override { cudaPackages = pkgs.cudaPackages_11; }).overrideAttrs (old: {
src = pkgs.fetchFromGitHub {
owner = "ggerganov";
repo = "whisper.cpp";
rev = "v1.7.1";
hash = "sha256-EDFUVjud79ZRCzGbOh9L9NcXfN3ikvsqkVSOME9F9oo=";
};
env = {
WHISPER_CUBLAS = "";
GGML_CUDA = "1";
};
# We only need Compute Capability 3.7.
CUDA_ARCH_FLAGS = [ "sm_37" ];
# We are GPU-only anyway.
patches = (old.patches or [ ]) ++ [
./no-weird-microarch.patch
./all-nvcc-arch.patch
];
}))
];
services = {
ollama = {
enable = true;
host = meta.network.${name}.netbirdIp;
package = pkgs.callPackage ./ollama.nix {
cudaPackages = pkgs.cudaPackages_11;
# We need to thread our nvidia x11 driver for CUDA.
extraLibraries = [ config.hardware.nvidia.package ];
};
};
};
networking.firewall.interfaces.wt0.allowedTCPPorts = [ config.services.ollama.port ];
};
root = ./.;
}

View file

@ -0,0 +1,50 @@
{
config,
lib,
modulesPath,
...
}:
{
imports = [ (modulesPath + "/installer/scan/not-detected.nix") ];
boot = {
initrd = {
availableKernelModules = [
"ehci_pci"
"ahci"
"mpt3sas"
"usbhid"
"sd_mod"
];
kernelModules = [ ];
};
kernelModules = [ "kvm-intel" ];
extraModulePackages = [ ];
};
fileSystems."/" = {
device = "/dev/disk/by-uuid/92bf4d66-2693-4eca-9b26-f86ae09d468d";
fsType = "ext4";
};
boot.initrd.luks.devices."mainfs" = {
device = "/dev/disk/by-uuid/26f9737b-28aa-4c3f-bd3b-b028283cef88";
keyFileSize = 1;
keyFile = "/dev/zero";
};
fileSystems."/boot" = {
device = "/dev/disk/by-uuid/280C-8844";
fsType = "vfat";
options = [
"fmask=0022"
"dmask=0022"
];
};
swapDevices = [ ];
nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
hardware.cpu.intel.updateMicrocode = lib.mkDefault config.hardware.enableRedistributableFirmware;
}


@ -0,0 +1,26 @@
From 2278389ef9ac9231349440aa68f9544ddc69cdc7 Mon Sep 17 00:00:00 2001
From: Raito Bezarius <masterancpp@gmail.com>
Date: Wed, 9 Oct 2024 13:37:08 +0200
Subject: [PATCH] fix: sm_37 for nvcc
Signed-off-by: Raito Bezarius <masterancpp@gmail.com>
---
Makefile | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/Makefile b/Makefile
index 2ccb750..70dfd9b 100644
--- a/Makefile
+++ b/Makefile
@@ -537,7 +537,7 @@ endif #GGML_CUDA_NVCC
ifdef CUDA_DOCKER_ARCH
MK_NVCCFLAGS += -Wno-deprecated-gpu-targets -arch=$(CUDA_DOCKER_ARCH)
else ifndef CUDA_POWER_ARCH
- MK_NVCCFLAGS += -arch=native
+ MK_NVCCFLAGS += -arch=sm_37
endif # CUDA_DOCKER_ARCH
ifdef GGML_CUDA_FORCE_DMMV
--
2.46.0


@ -0,0 +1,20 @@
diff --git c/llm/generate/gen_common.sh i/llm/generate/gen_common.sh
index 3825c155..238a74a7 100644
--- c/llm/generate/gen_common.sh
+++ i/llm/generate/gen_common.sh
@@ -69,6 +69,7 @@ git_module_setup() {
}
apply_patches() {
+ return
# apply temporary patches until fix is upstream
for patch in ../patches/*.patch; do
git -c 'user.name=nobody' -c 'user.email=<>' -C ${LLAMACPP_DIR} am ${patch}
@@ -133,6 +134,7 @@ install() {
# Keep the local tree clean after we're done with the build
cleanup() {
+ return
(cd ${LLAMACPP_DIR}/ && git checkout CMakeLists.txt)
if [ -n "$(ls -A ../patches/*.diff)" ]; then


@ -0,0 +1,22 @@
_: {
microvm.autostart = [ "ml01" ];
microvm.vms.ml01 = {
config = {
networking.hostName = "ml01";
microvm = {
hypervisor = "cloud-hypervisor";
vcpu = 4;
mem = 4096;
balloonMem = 2048;
shares = [
{
source = "/nix/store";
mountPoint = "/nix/.ro-store";
tag = "ro-store";
proto = "virtiofs";
}
];
};
};
};
}
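microvm.autostart only arranges for the VM to come up with the host; day-to-day control goes through the per-VM systemd units. A hedged sketch, assuming the microvm@<name>.service naming used by microvm.nix:

# On krz01, as root; the unit name is an assumption about microvm.nix.
systemctl status microvm@ml01.service
systemctl restart microvm@ml01.service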


@ -0,0 +1,16 @@
_: {
microvm.autostart = [ "router01" ];
microvm.vms.router01 = {
config = {
networking.hostName = "router01";
microvm.shares = [
{
source = "/nix/store";
mountPoint = "/nix/.ro-store";
tag = "ro-store";
proto = "virtiofs";
}
];
};
};
}


@ -0,0 +1,34 @@
From 51568b61ef63ecd97867562571411082c32751d3 Mon Sep 17 00:00:00 2001
From: Raito Bezarius <masterancpp@gmail.com>
Date: Wed, 9 Oct 2024 13:36:51 +0200
Subject: [PATCH] fix: avx & f16c in Makefile
Signed-off-by: Raito Bezarius <masterancpp@gmail.com>
---
Makefile | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/Makefile b/Makefile
index 32b7cbb..2ccb750 100644
--- a/Makefile
+++ b/Makefile
@@ -361,12 +361,12 @@ ifndef RISCV
ifeq ($(UNAME_M),$(filter $(UNAME_M),x86_64 i686 amd64))
# Use all CPU extensions that are available:
- MK_CFLAGS += -march=native -mtune=native
- HOST_CXXFLAGS += -march=native -mtune=native
+ # MK_CFLAGS += -march=native -mtune=native
+ # HOST_CXXFLAGS += -march=native -mtune=native
# Usage AVX-only
- #MK_CFLAGS += -mfma -mf16c -mavx
- #MK_CXXFLAGS += -mfma -mf16c -mavx
+ MK_CFLAGS += -mf16c -mavx
+ MK_CXXFLAGS += -mf16c -mavx
# Usage SSSE3-only (Not is SSE3!)
#MK_CFLAGS += -mssse3
--
2.46.0


@ -0,0 +1,8 @@
{ config, ... }:
{
nixpkgs.config.nvidia.acceptLicense = true;
# Tesla K80 is not supported by the latest driver.
hardware.nvidia.package = config.boot.kernelPackages.nvidia_x11_legacy470;
# Don't ask.
services.xserver.videoDrivers = [ "nvidia" ];
}
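A quick way to confirm that the legacy 470 driver actually claimed the card once the configuration is deployed (the Tesla K80 exposes two GK210 GPUs, so two entries are expected):

# Run on krz01; nvidia-smi ships with the proprietary driver.
nvidia-smi --query-gpu=name,driver_version --format=csv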

machines/krz01/ollama.nix Normal file (243 lines)

@ -0,0 +1,243 @@
{
lib,
buildGoModule,
fetchFromGitHub,
buildEnv,
linkFarm,
overrideCC,
makeWrapper,
stdenv,
addDriverRunpath,
nix-update-script,
cmake,
gcc11,
clblast,
libdrm,
rocmPackages,
cudaPackages,
darwin,
autoAddDriverRunpath,
extraLibraries ? [ ],
nixosTests,
testers,
ollama,
ollama-rocm,
ollama-cuda,
config,
# one of `[ null false "rocm" "cuda" ]`
acceleration ? null,
}:
assert builtins.elem acceleration [
null
false
"rocm"
"cuda"
];
let
pname = "ollama";
version = "2024-09-10-cc35";
src = fetchFromGitHub {
owner = "aliotard";
repo = "ollama";
rev = "34827c01f7723c7f5f9f5e392fe85f5a4a5d5fc0";
hash = "sha256-xFNuqcW7YWeyCyw5QLBnCHHTSMITR6LJkJT0CXZC+Y8=";
fetchSubmodules = true;
};
vendorHash = "sha256-hSxcREAujhvzHVNwnRTfhi0MKI3s8HNavER2VLz6SYk=";
validateFallback = lib.warnIf (config.rocmSupport && config.cudaSupport) (lib.concatStrings [
"both `nixpkgs.config.rocmSupport` and `nixpkgs.config.cudaSupport` are enabled, "
"but they are mutually exclusive; falling back to cpu"
]) (!(config.rocmSupport && config.cudaSupport));
shouldEnable =
mode: fallback: (acceleration == mode) || (fallback && acceleration == null && validateFallback);
rocmRequested = shouldEnable "rocm" config.rocmSupport;
cudaRequested = shouldEnable "cuda" config.cudaSupport;
enableRocm = rocmRequested && stdenv.isLinux;
enableCuda = cudaRequested && stdenv.isLinux;
rocmLibs = [
rocmPackages.clr
rocmPackages.hipblas
rocmPackages.rocblas
rocmPackages.rocsolver
rocmPackages.rocsparse
rocmPackages.rocm-device-libs
rocmPackages.rocm-smi
];
rocmClang = linkFarm "rocm-clang" { llvm = rocmPackages.llvm.clang; };
rocmPath = buildEnv {
name = "rocm-path";
paths = rocmLibs ++ [ rocmClang ];
};
cudaLibs = [
cudaPackages.cuda_cudart
cudaPackages.libcublas
cudaPackages.cuda_cccl
];
cudaToolkit = buildEnv {
name = "cuda-merged";
paths = map lib.getLib cudaLibs ++ [
(lib.getOutput "static" cudaPackages.cuda_cudart)
(lib.getBin (cudaPackages.cuda_nvcc.__spliced.buildHost or cudaPackages.cuda_nvcc))
];
};
metalFrameworks = with darwin.apple_sdk_11_0.frameworks; [
Accelerate
Metal
MetalKit
MetalPerformanceShaders
];
wrapperOptions =
[
# ollama embeds llama-cpp binaries which actually run the ai models
# these llama-cpp binaries are unaffected by the ollama binary's DT_RUNPATH
# LD_LIBRARY_PATH is temporarily required to use the gpu
# until these llama-cpp binaries can have their runpath patched
"--suffix LD_LIBRARY_PATH : '${addDriverRunpath.driverLink}/lib'"
"--suffix LD_LIBRARY_PATH : '${lib.makeLibraryPath (map lib.getLib extraLibraries)}'"
]
++ lib.optionals enableRocm [
"--suffix LD_LIBRARY_PATH : '${rocmPath}/lib'"
"--set-default HIP_PATH '${rocmPath}'"
]
++ lib.optionals enableCuda [
"--suffix LD_LIBRARY_PATH : '${lib.makeLibraryPath (map lib.getLib cudaLibs)}'"
];
wrapperArgs = builtins.concatStringsSep " " wrapperOptions;
goBuild =
if enableCuda then buildGoModule.override { stdenv = overrideCC stdenv gcc11; } else buildGoModule;
inherit (lib) licenses platforms maintainers;
in
goBuild {
inherit
pname
version
src
vendorHash
;
env =
lib.optionalAttrs enableRocm {
ROCM_PATH = rocmPath;
CLBlast_DIR = "${clblast}/lib/cmake/CLBlast";
}
// lib.optionalAttrs enableCuda { CUDA_LIB_DIR = "${cudaToolkit}/lib"; }
// {
CMAKE_CUDA_ARCHITECTURES = "35;37";
};
nativeBuildInputs =
[ cmake ]
++ lib.optionals enableRocm [ rocmPackages.llvm.bintools ]
++ lib.optionals enableCuda [ cudaPackages.cuda_nvcc ]
++ lib.optionals (enableRocm || enableCuda) [
makeWrapper
autoAddDriverRunpath
]
++ lib.optionals stdenv.isDarwin metalFrameworks;
buildInputs =
lib.optionals enableRocm (rocmLibs ++ [ libdrm ])
++ lib.optionals enableCuda cudaLibs
++ lib.optionals stdenv.isDarwin metalFrameworks;
patches = [
# disable uses of `git` in the `go generate` script
# ollama's build script assumes the source is a git repo, but nix removes the git directory
# this also disables necessary patches contained in `ollama/llm/patches/`
# those patches are applied in `postPatch`
./disable-git.patch
];
postPatch = ''
# replace inaccurate version number with actual release version
substituteInPlace version/version.go --replace-fail 0.0.0 '${version}'
# apply ollama's patches to `llama.cpp` submodule
for diff in llm/patches/*; do
patch -p1 -d llm/llama.cpp < $diff
done
'';
overrideModAttrs = _: _: {
# don't run llama.cpp build in the module fetch phase
preBuild = "";
};
preBuild = ''
# disable uses of `git`, since nix removes the git directory
export OLLAMA_SKIP_PATCHING=true
# build llama.cpp libraries for ollama
go generate ./...
'';
postFixup =
''
# the app doesn't appear functional at the moment, so hide it
mv "$out/bin/app" "$out/bin/.ollama-app"
''
+ lib.optionalString (enableRocm || enableCuda) ''
# expose runtime libraries necessary to use the gpu
wrapProgram "$out/bin/ollama" ${wrapperArgs}
'';
ldflags = [
"-s"
"-w"
"-X=github.com/ollama/ollama/version.Version=${version}"
"-X=github.com/ollama/ollama/server.mode=release"
"-X=github.com/ollama/ollama/gpu.CudaComputeMajorMin=3"
"-X=github.com/ollama/ollama/gpu.CudaComputeMinorMin=5"
];
passthru = {
tests =
{
inherit ollama;
version = testers.testVersion {
inherit version;
package = ollama;
};
}
// lib.optionalAttrs stdenv.isLinux {
inherit ollama-rocm ollama-cuda;
service = nixosTests.ollama;
service-cuda = nixosTests.ollama-cuda;
service-rocm = nixosTests.ollama-rocm;
};
updateScript = nix-update-script { };
};
meta = {
description =
"Get up and running with large language models locally"
+ lib.optionalString rocmRequested ", using ROCm for AMD GPU acceleration"
+ lib.optionalString cudaRequested ", using CUDA for NVIDIA GPU acceleration";
homepage = "https://github.com/ollama/ollama";
changelog = "https://github.com/ollama/ollama/releases/tag/v${version}";
license = licenses.mit;
platforms = if (rocmRequested || cudaRequested) then platforms.linux else platforms.unix;
mainProgram = "ollama";
maintainers = with maintainers; [
abysssol
dit7ya
elohmeier
roydubnium
];
};
}


@ -0,0 +1,14 @@
{ sources, lib, ... }:
let
proxmox-nixos = import sources.proxmox-nixos;
in
{
imports = [ proxmox-nixos.nixosModules.proxmox-ve ];
services.proxmox-ve.enable = true;
nixpkgs.overlays = [ proxmox-nixos.overlays.x86_64-linux ];
networking.firewall = {
trustedInterfaces = [ "wt0" ];
allowedTCPPorts = lib.mkForce [ 22 ];
};
}
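With allowedTCPPorts forced to 22 and wt0 trusted, the Proxmox VE web UI (port 8006 by default) should only be reachable over NetBird. A hedged reachability check from another peer:

# 100.80.103.206 is krz01's NetBird IP; expect the pveproxy login page.
curl -k https://100.80.103.206:8006/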


@ -0,0 +1,3 @@
(import ../../../keys).mkSecrets [ "krz01" ] [
# List of secrets for krz01
]


@@ -1,5 +1,4 @@
-let
-  lib = import ../../../lib { };
-  publicKeys = lib.getNodeKeys "rescue01";
-in
-lib.setDefault { inherit publicKeys; } [ "stateless-uptime-kuma-password" ]
+(import ../../../keys).mkSecrets [ "rescue01" ] [
+  # List of secrets for rescue01
+  "stateless-uptime-kuma-password"
+]


@@ -9,7 +9,6 @@ lib.extra.mkConfig {
   enabledServices = [
     # List of services to enable
-    "atticd"
     "tvix-cache"
     "forgejo"
     "forgejo-runners"


@ -1,82 +0,0 @@
{ config, nixpkgs, ... }:
let
host = "cachix.dgnum.eu";
in
{
services = {
atticd = {
enable = true;
credentialsFile = config.age.secrets."atticd-credentials_file".path;
settings = {
listen = "127.0.0.1:9099";
api-endpoint = "https://${host}/";
allowed-hosts = [ host ];
chunking = {
# The minimum NAR size to trigger chunking
#
# If 0, chunking is disabled entirely for newly-uploaded NARs.
# If 1, all NARs are chunked.
nar-size-threshold = 0; # 64 KiB
# The preferred minimum size of a chunk, in bytes
min-size = 16 * 1024; # 16 KiB
# The preferred average size of a chunk, in bytes
avg-size = 64 * 1024; # 64 KiB
# The preferred maximum size of a chunk, in bytes
max-size = 256 * 1024; # 256 KiB
};
database.url = "postgresql://atticd?host=/run/postgresql";
storage = {
type = "s3";
region = "garage";
bucket = "attic-dgnum";
endpoint = "https://s3.dgnum.eu";
};
};
useFlakeCompatOverlay = false;
package = nixpkgs.unstable.attic-server;
};
nginx = {
enable = true;
virtualHosts.${host} = {
enableACME = true;
forceSSL = true;
locations."/" = {
proxyPass = "http://127.0.0.1:9099";
extraConfig = ''
client_max_body_size 10G;
'';
};
};
};
postgresql = {
enable = true;
ensureDatabases = [ "atticd" ];
ensureUsers = [
{
name = "atticd";
ensureDBOwnership = true;
}
];
};
};
systemd.services.atticd.environment.RUST_LOG = "warn";
}


@@ -15,6 +15,8 @@ let
   ];

   buckets = [
+    "monorepo-terraform-state"
+
     "banda-website"
     "castopod-dgnum"
     "hackens-website"
@@ -28,14 +30,14 @@ in
   services.garage = {
     enable = true;

-    package = pkgs.garage_0_9;
+    package = pkgs.garage_1_0_1;

    settings = {
      inherit data_dir metadata_dir;

      db_engine = "lmdb";
-      replication_mode = "none";
+      replication_mode = "none"; # TODO: deprecated
      compression_level = 7;

      rpc_bind_addr = "[::]:3901";
@@ -67,7 +69,7 @@ in
      data_dir
      metadata_dir
    ];
-    TimeoutSec = 3000;
+    TimeoutSec = 600;
  };

  users.users.garage = {
@@ -77,6 +79,17 @@ in
  users.groups.garage = { };

  services.nginx.virtualHosts = {
+    "s3-admin.dgnum.eu" = {
+      enableACME = true;
+      forceSSL = true;
+
+      locations."/".extraConfig = ''
+        proxy_pass http://127.0.0.1:3903;
+        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+        proxy_set_header Host $host;
+      '';
+    };
+
    ${host} = {
      enableACME = true;
      forceSSL = true;

@ -1,30 +0,0 @@
age-encryption.org/v1
-> ssh-ed25519 jIXfPA HECtxDO0OV6To/Qs3A+2N8+3xqsHp6pz6d4ArgsgXS4
mnmDwWZ6d1aW5Qejzv2Jo112ee78wKVx90R7r5wQbYo
-> ssh-ed25519 QlRB9Q Rx3bV/DkoCCvQCMwJGOfibG8Rif5Ap+W6EqWlFOhUQc
jxEFUWqxedwIK3mNyOG+5dyFFZbJZ3XNFXnk0fe0vyw
-> ssh-ed25519 r+nK/Q J591Cg/4oP26LT7Tl/wrdDipR/gpg1WMsiKJN0ygbjw
WToE5xtuF2FOqtvRgz1SZStYGjTsKRxguIioan+vluU
-> ssh-rsa krWCLQ
hhp33AzK6wYWM6k7ZroV0J5i8C5MQXjQY9sksPQdABRQUd6XTmYOIOdA0ste0EA9
hqbbHQwbFy0oE/QKfnUZWbgJo5Us1DWKxip55L875CPfVcmxvC2ADRO5JKKNkQa/
P4zBALPqf+BXrafcGN4hT8D9gywIWdQ2zPSpKbJE+OdPcUrBVH/ndMUVoLfTEKL9
B3XgqRvLNkgsdu7FMEPnelWT3WrxkBME7AathdXcEYXSxiTmaKqxDzRtcNLdh+y2
6XfQU6lLMT+WWPD/Ro7UzLrWUnFJMYK0SinkOuX+PKxMq95lCc5kI3tZ7JL7bC5E
vBGnX9w0unyR//LLqrOPWA
-> ssh-ed25519 /vwQcQ eYSTWAYs/L+cYt/16TrKaIqoc9TFJQncM02Vd8hOg3A
lWalXa1ZBtrjXOB+sznWCjStFHF4ulLaBilEc3b7qWc
-> ssh-ed25519 0R97PA 78K7uF/mXT4pgTbnmfpyxY2czgs+DNueusuatUx7MCQ
C/pWPdVCWZuHFuM5fzJHdGZomM3Wbt22iwfLbLSznh0
-> ssh-ed25519 JGx7Ng xFzEGNVIiC0cXCbcSKUfmVLAdRBH7xu6/2E7nVoRwjI
+TgvIl03KGm5N55+jGc7UcyRHjMvAFm3Kbvx5Ma4HQ4
-> ssh-ed25519 5SY7Kg 7YO/crKVWSsr3Hy5HPr0/R3oPdCA2kWduZYeSlcxGnI
N0IpdylU+3ybInseGSKPONxeNr8mh/ZlBGCvY2c0WTA
-> ssh-ed25519 p/Mg4Q y1ekwzz3sSHGrLmb0NqF6VWfalARy+PykE77hVqD7Xc
0s9QrDsLH6XdzetyIXJEB2MrwwUi8CDpu7SEemm8zJ4
-> ssh-ed25519 rHotTw 7SMzV/pEmDISPL/fMjafXM3URZpbUPTg+9AngZ0GZTc
eIi1+i9JVBLvfQMkmMv5S0N8qgwVtyklX/J+6MdtlSc
--- Gjl7lNWG9gyMlg256Oa5i5bFLm1Cup1upjsEDVurgDo
(binary age payload not shown)


@@ -1,9 +1,5 @@
-let
-  lib = import ../../../lib { };
-  publicKeys = lib.getNodeKeys "storage01";
-in
-lib.setDefault { inherit publicKeys; } [
-  "atticd-credentials_file"
+(import ../../../keys).mkSecrets [ "storage01" ] [
+  # List of secrets for storage01
   "bupstash-put_key"
   "forgejo-mailer_password_file"
   "forgejo_runners-token_file"


@ -0,0 +1,14 @@
let
cache-info = {
infra = {
public-key = "infra.tvix-store.dgnum.eu-1:8CAY64o3rKjyw2uA5mzr/aTzstnc+Uj4g8OC6ClG1m8=";
url = "https://tvix-store.dgnum.eu/infra";
};
};
in
{ caches }:
{
trusted-substituters = builtins.map (cache: cache-info.${cache}.url) caches;
trusted-public-keys = builtins.map (cache: cache-info.${cache}.public-key) caches;
}
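cache-info.nix is a plain function from a list of cache names to nix.conf-style settings; evaluating it shows exactly what users get. A sketch, assuming it is called from its own directory:

# "infra" is the only cache currently defined.
nix-instantiate --eval --strict -E \
  '(import ./cache-info.nix) { caches = [ "infra" ]; }'
# Expected: trusted-substituters = [ "https://tvix-store.dgnum.eu/infra" ]
#           trusted-public-keys  = [ "infra.tvix-store.dgnum.eu-1:8CAY64o3rKjyw2uA5mzr/aTzstnc+Uj4g8OC6ClG1m8=" ]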


@@ -1,9 +1,13 @@
 { pkgs, config, ... }:

 let
-  settingsFormat = pkgs.formats.toml { };
-  dataDir = "/data/slow/tvix-store";
+  # How to add a cache:
+  # - Add the relevant services (likely only a pathinfoservice) to the
+  #   composition config (store-config.composition).
+  # - Add an endpoint (store-config.endpoints).
+  # - Append a proxy configuration to nginx in order to make the store
+  #   accessible.
+  # - Update cache-info.nix so users can add the cache to their configuration
   store-config = {
     composition = {
       blobservices.default = {
@@ -54,6 +58,13 @@ let
       };
     };
   };
+
+  settingsFormat = pkgs.formats.toml { };
+
+  webHost = "tvix-store.dgnum.eu";
+  dataDir = "/data/slow/tvix-store";
+
   systemdHardening = {
     PrivateDevices = true;
     PrivateTmp = true;
@@ -70,10 +81,12 @@ let
     RuntimeDirectoryMode = "0750";
     StateDirectoryMode = "0750";
   };
+
   toml = {
     composition = settingsFormat.generate "composition.toml" store-config.composition;
     endpoints = settingsFormat.generate "endpoints.toml" store-config.endpoints;
   };
+
   package = pkgs.callPackage ./package { };
 in
 {
@@ -83,7 +96,7 @@ in
     "nginx"
   ];

-  services.nginx.virtualHosts."tvix-store.dgnum.eu" = {
+  services.nginx.virtualHosts.${webHost} = {
     enableACME = true;
     forceSSL = true;
     locations = {
@@ -105,14 +118,12 @@ in
          auth_basic_user_file ${config.age.secrets."nginx-tvix-store-password-ci".path};
        '';
      };
-      "/.well-known/nix-signing-keys/" = {
-        alias = "${./pubkeys}/";
-        extraConfig = "autoindex on;";
-      };
    };
  };

  # TODO add tvix-store cli here
  # environment.systemPackages = [ ];

  users.users.tvix-store = {
    isSystemUser = true;
    group = "tvix-store";


@ -1 +0,0 @@
infra.tvix-store.dgnum.eu-1:8CAY64o3rKjyw2uA5mzr/aTzstnc+Uj4g8OC6ClG1m8=


@@ -238,7 +238,11 @@ in
       content = ''
         chain postrouting {
           type nat hook postrouting priority 100;
-          ip saddr 10.0.0.0/16 ether saddr 5c:64:8e:f4:09:06 snat ip to 129.199.195.130-129.199.195.158
+          ip saddr 10.0.0.0/16 ip saddr != 10.0.255.0/24 snat ip to 129.199.195.130-129.199.195.158
+          ether saddr e0:2b:e9:b5:b4:cc snat to 129.199.195.130 comment "Elias"
+          ether saddr { 1c:1b:b5:14:9c:e5, e6:ce:e2:b6:e3:82 } snat to 129.199.195.131 comment "Lubin"
+          ether saddr d0:49:7c:46:f6:39 snat to 129.199.195.132 comment "Jean-Marc"
+          ether saddr { 5c:64:8e:f4:09:06 } snat to 129.199.195.158 comment "APs"
         }
       '';
     };


@@ -1,8 +1,5 @@
-let
-  lib = import ../../../lib { };
-  publicKeys = lib.getNodeKeys "vault01";
-in
-lib.setDefault { inherit publicKeys; } [
+(import ../../../keys).mkSecrets [ "vault01" ] [
+  # List of secrets for vault01
   "radius-auth_token_file"
   "radius-ca_pem_file"
   "radius-cert_pem_file"


@@ -1,8 +1,5 @@
-let
-  lib = import ../../../lib { };
-  publicKeys = lib.getNodeKeys "web01";
-in
-lib.setDefault { inherit publicKeys; } [
+(import ../../../keys).mkSecrets [ "web01" ] [
+  # List of secrets for web01
   "acme-certs_secret"
   "bupstash-put_key"
   "matterbridge-config_file"


@@ -1,4 +1,14 @@
-diff --git a/cas_server/tests/test_federate.py b/cas_server/tests/test_federate.py
+diff --git a/setup.py b/setup.py
+index 7c7b02d..3f677ff 100644
+--- a/setup.py
++++ b/setup.py
+@@ -67,6 +67,4 @@ if __name__ == '__main__':
+         url="https://github.com/nitmir/django-cas-server",
+         download_url="https://github.com/nitmir/django-cas-server/releases/latest",
+         zip_safe=False,
+-        setup_requires=['pytest-runner'],
+-        tests_require=['pytest', 'pytest-django', 'pytest-pythonpath', 'pytest-warnings', 'mock>=1'],
+     )
 index 2b389d3..dcdfafd 100644
 --- a/cas_server/tests/test_federate.py
 +++ b/cas_server/tests/test_federate.py


@@ -1,7 +1,5 @@
-let
-  lib = import ../../../lib { };
-in
-lib.setDefault { publicKeys = lib.getNodeKeys "web02"; } [
+(import ../../../keys).mkSecrets [ "web02" ] [
+  # List of secrets for web02
   "cas_eleves-secret_key_file"
   "kadenios-secret_key_file"
   "kadenios-email_password_file"


@@ -68,6 +68,12 @@ let
     "support" # Zammad support
     "telegraf" # Telegraf

+    # Beta-grade machine learning API servers
+    "ollama01.beta"
+    "openui.beta"
+    "whisper.beta"
+    "stable-diffusion.beta"
+
     # DGSI
     "dgsi"
     "profil"
@@ -87,6 +93,8 @@ let
     "*.s3"
     "cdn"
     "s3"
+
+    # The administration endpoint for Garage.
+    "s3-admin"
   ];

   rescue01.dual = [


@@ -29,6 +29,29 @@
     netbirdIp = "100.80.75.197";
   };

+  krz01 = {
+    interfaces = {
+      eno1 = {
+        ipv4 = [
+          {
+            address = "129.199.146.21";
+            prefixLength = 24;
+          }
+          {
+            address = "192.168.1.145";
+            prefixLength = 24;
+          }
+        ];
+        gateways = [ "129.199.146.254" ];
+        enableDefaultDNS = true;
+      };
+    };
+
+    hostId = "bd11e8fc";
+    netbirdIp = "100.80.103.206";
+  };
+
   geo01 = {
     interfaces = {
       eno1 = {


@@ -22,6 +22,8 @@
   bridge01 = {
     site = "hyp01";

+    hashedPassword = "$y$j9T$EPJdz70kselouXAVUmAH01$8nYbUBY9NPTMfYigegY0qFSdxJwhqzW8sFacDqEYCP5";
+
     stateVersion = "24.05";
     adminGroups = [ "fai" ];
@@ -30,7 +32,7 @@
       targetHost = "fd26:baf9:d250:8000::ffff";
       sshOptions = [
         "-J"
-        "vault01.hyp01.infra.dgnum.eu"
+        "root@vault01.hyp01.infra.dgnum.eu"
       ];
     };
   };
@@ -40,6 +42,8 @@
     deployment.tags = [ "web" ];

+    hashedPassword = "$y$j9T$9YqXO93VJE/GP3z8Sh4h51$hrBsEPL2O1eP/wBZTrNT8XV906V4JKbQ0g04IWBcyd2";
+
     stateVersion = "23.05";
     vm-cluster = "Hyperviseur NPS";
@@ -49,6 +53,8 @@
   compute01 = {
     site = "pav01";

+    hashedPassword = "$y$j9T$2nxZHq84G7fWvWMEaGavE/$0ADnmD9qMpXJJ.rWWH9086EakvZ3wAg0mSxZYugOf3C";
+
     stateVersion = "23.05";
     nix-modules = [ "services/stirling-pdf" ];
     nixpkgs = "24.05";
@@ -58,6 +64,8 @@
     site = "oik01";
     deployment.tags = [ "geo" ];

+    hashedPassword = "$y$j9T$2XmDpJu.QLhV57yYCh5Lf1$LK.X0HKB02Q0Ujvhj5nIofW2IRrIAL/Uxnvl9AXM1L8";
+
     stateVersion = "24.05";
     nixpkgs = "24.05";
   };
@@ -66,12 +74,28 @@
     site = "oik01";
     deployment.tags = [ "geo" ];

+    hashedPassword = "$y$j9T$Q4fbMpSm9beWu4DPNAR9t0$dx/1pH4GPY72LpS5ZiECXAZFDdxwmIywztsX.qo2VVA";
+
     stateVersion = "24.05";
     nixpkgs = "24.05";
   };

+  krz01 = {
+    site = "pav01";
+
+    hashedPassword = "$y$j9T$eNZQgDN.J5y7KTG2hXgat1$J1i5tjx5dnSZu.C9B7swXi5zMFIkUnmRrnmyLHFAt8/";
+
+    stateVersion = "24.05";
+    nixpkgs = "unstable";
+    adminGroups = [ "lab" ];
+  };
+
   storage01 = {
     site = "pav01";

+    hashedPassword = "$y$j9T$tvRu1EJ9MwDSvEm0ogwe70$bKSw6nNteN0L3NOy2Yix7KlIvO/oROQmQ.Ynq002Fg8";
+
     stateVersion = "23.11";
     nixpkgs = "24.05";
@@ -82,6 +106,8 @@
     site = "hyp01";
     deployment.targetHost = "vault01.hyp01.infra.dgnum.eu";

+    hashedPassword = "$y$j9T$5osXVNxCDxu3jIndcyh7G.$UrjiDRpMu3W59tKHLGNdLWllZh.4p8IM4sBS5SrNrN1";
+
     stateVersion = "23.11";
     nixpkgs = "24.05";
@@ -91,6 +117,8 @@
   web02 = {
     site = "rat01";

+    hashedPassword = "$y$j9T$p42UVNy78PykkQOjPwXNJ/$B/zCUOrHXVSFGUY63wnViMiSmU2vCWsiX0y62qqgNQ5";
+
     stateVersion = "24.05";
     nixpkgs = "24.05";
     vm-cluster = "Hyperviseur NPS";
@@ -101,6 +129,8 @@
     deployment.targetHost = "v6.rescue01.luj01.infra.dgnum.eu";

+    hashedPassword = "$y$j9T$nqoMMu/axrD0m8AlUFdbs.$UFVmIdPAOHBe2jJv5HJJTcDgINC7LTnSGRQNs9zS1mC";
+
     stateVersion = "23.11";
     vm-cluster = "Hyperviseur Luj";
   };


@@ -139,6 +139,13 @@ in
       '';
     };

+    hashedPassword = mkOption {
+      type = str;
+      description = ''
+        The hashed password for the root account.
+      '';
+    };
+
     admins = mkOption {
       type = listOf str;
       default = [ ];
@@ -368,10 +375,10 @@ in
         name: "A member of the external service ${name} admins was not found in the members list."
       ) org.external)

-      # Check that all members have a keyFile
+      # Check that all members have ssh keys
       (builtins.map (name: {
-        assertion = builtins.pathExists "${builtins.toString ../keys}/${name}.keys";
-        message = "No ssh keys file found for ${name}.";
+        assertion = ((import ../keys)._keys.${name} or [ ]) != [ ];
+        message = "No ssh keys found for ${name}.";
       }) members)
     ];
   };


@@ -55,6 +55,12 @@
       "catvayor"
       "ecoppens"
     ];
+
+    lab = [
+      "catvayor"
+      "ecoppens"
+    ];
   };

   external = {


@@ -5,12 +5,6 @@ let
   pkgs = import sources.nixpkgs { };

   dns = import sources."dns.nix" { inherit pkgs; };
-
-  lib = import sources.nix-lib {
-    inherit (pkgs) lib;
-    keysRoot = ../keys;
-  };
 in
 {
@@ -29,6 +23,14 @@ in
     pkgs.writers.writeJSON "meta.json" config;

   dns = dns.util.writeZone "dgnum.eu" (
-    pkgs.lib.recursiveUpdate { SOA.serial = 0; } (import ./dns.nix { inherit dns lib; })
+    pkgs.lib.recursiveUpdate { SOA.serial = 0; } (
+      import ./dns.nix {
+        inherit dns;
+        lib = pkgs.lib // {
+          extra = import ../lib/nix-lib;
+        };
+      }
+    )
   );
 }


@@ -61,8 +61,8 @@
     ])
     ++ [
       "${sources.agenix}/modules/age.nix"
-      "${sources.attic}/nixos/atticd.nix"
       "${sources.arkheon}/module.nix"
+      "${sources."microvm.nix"}/nixos-modules/host"
     ]
     ++ ((import sources.nix-modules { inherit lib; }).importModules (
       [


@@ -34,6 +34,7 @@
 {
   config,
   lib,
+  dgn-keys,
   meta,
   nodeMeta,
   ...
@@ -44,6 +45,7 @@ let
     mkDefault
     mkEnableOption
     mkIf
+    mkMerge
     mkOption
     types
@@ -78,12 +80,22 @@ in
     };
   };

-  config = mkIf cfg.enable {
-    # Admins have root access to the node
-    dgn-access-control.users.root = mkDefault admins;
-
-    users.users = builtins.mapAttrs (_: members: {
-      openssh.authorizedKeys.keys = lib.extra.getAllKeys members;
-    }) cfg.users;
-  };
+  config = mkIf cfg.enable (mkMerge [
+    {
+      # Admins have root access to the node
+      dgn-access-control.users.root = mkDefault admins;
+
+      users.users = builtins.mapAttrs (_: members: {
+        openssh.authorizedKeys.keys = dgn-keys.getKeys members;
+      }) cfg.users;
+    }
+    {
+      users = {
+        mutableUsers = false;
+        users.root = {
+          inherit (nodeMeta) hashedPassword;
+        };
+      };
+    }
+  ]);
 }


@@ -1,6 +1,7 @@
 {
   config,
   lib,
+  dgn-keys,
   name,
   ...
 }:
@@ -103,15 +104,12 @@ in
       access = [
         {
           repo = "default";
-          keys = lib.extra.getAllKeys (
-            # Nodes allowed to create backups
-            builtins.map (host: "machines/${host}") [
-              "compute01"
-              "storage01"
-              "vault01"
-              "web01"
-            ]
-          );
+          keys = dgn-keys.getKeys [
+            "compute01"
+            "storage01"
+            "vault01"
+            "web01"
+          ];
           allowed = [ "put" ];
         }
       ];
@@ -121,8 +119,7 @@ in
   };

   programs.ssh.knownHosts =
-    lib.extra.mapFuse
-      (host: { "${host}.dgnum".publicKey = builtins.head (lib.extra.getKeys "machines/${host}"); })
+    lib.extra.mapFuse (host: { "${host}.dgnum".publicKey = builtins.head dgn-keys._keys.${host}; })
       [
         "compute01"
         "geo01"


@@ -1,8 +1,4 @@
-let
-  lib = import ../../../lib { };
-in
-lib.setDefault { publicKeys = lib.rootKeys; } [
+(import ../../../keys).mkSecrets [ ] [
   "compute01.key"
   "storage01.key"
   "web01.key"


@@ -43,6 +43,7 @@ in
   config = mkIf cfg.enable (mkMerge [
     {
+      microvm.host.enable = lib.mkDefault false;
       hardware.enableRedistributableFirmware = true;
       hardware.cpu.intel.updateMicrocode = true;


@@ -1 +1 @@
-{ netbox-agent.publicKeys = (import ../../lib { }).machineKeys; }
+{ netbox-agent.publicKeys = (import ../../keys).machineKeys; }

Binary file not shown.


@@ -1 +1 @@
-{ mail.publicKeys = (import ../../lib { }).machineKeys; }
+{ mail.publicKeys = (import ../../keys).machineKeys; }


@ -1,44 +1,46 @@
age-encryption.org/v1 age-encryption.org/v1
-> ssh-ed25519 jIXfPA FhSZKBAccqBqfeayNqY3fhYSi+0NMxsxS3WsdvuVu2M -> ssh-ed25519 jIXfPA sHMGZvBA3KQ+vgyPRvthm7RrZv+cpA8rVaLMG11tWzc
xT37RUaShiHdPBUnjWntSY43LqXsR8Pgz5kUZ/mgz2w wb74jb8YFbu4hTaKECNpaCV5besptdBoXXstKd+eLTI
-> ssh-ed25519 QlRB9Q xwok3cJ6SlGxlGi/UesKHVf+O4q9mn7btLweXJzeknI -> ssh-ed25519 QlRB9Q RILFFiLngUvfSPOmw6ZLmFLVyIIQqzib7LTV8hZP/w4
LrigakDhwhHCHEaJ0eQx6TIke9vYLqXwwaUjusWOvSk na6S3iWEs3cxff30X59wD0SUNEP0/9LcuCyCUi7wgxg
-> ssh-ed25519 r+nK/Q DS8/iUfczVGxB/Hl6EkweNAGSM0ZhWqrFy4xn82QNH8 -> ssh-ed25519 r+nK/Q Mtrr3NKJG1MBw150IZK1ZTKCglktIK8mV2M7FiLz9EQ
0Z8KOLZtxh2c0JTeiPbz3ZDF3CYrDs7bmwKjjemTs0o zEEJwKeucMsZePFTZF/Cxfcuqn7KiSoBmBnNVKX1jAY
-> ssh-rsa krWCLQ -> ssh-rsa krWCLQ
CDqVAHHD/1keQdgJZX5/hkiYMpZae1MocI5LjtWWg+QDkw1Bp6bNZLou8Uc2RG0H r3OX+AaSGO0zLoEAvAo3UrtWwU/Vjyfdp+qy4haB3tpl305I6Y6O6n2iHnc1PFgw
xZIB+z1XSXf7iMla5l7RWbW+g61T38QKWoAwvAGFz+XOstBTDY4bWgSv1g6vm+6x qQ7Sa0GekbxNcwD7MzAmKbsm9wmnrF2hX03gFDI5isEPxaLC6ha207Ykauc2q1JC
XuQLxCkj4cmy3dUsvaiiQXsstuMGOWSUbp2OQWfErzoVegHVCr/XKSAI1vMwQOWN /SOZ/OUiizBUuO5OjywYz2AJUfEabmd+X1fw5QxAPSfp57KBZDJCGSpEDeJigU7M
9tJUJCKEo2DTr5OmIL7kSWguVZYy77ta7JxmGbPrNQ7LJuRoZkUgX4V37SFgDKN4 1n1XsT6eCyNDIIozRzIIyxLZU+tDDswjvjCaDJ/t2BE76LienwMRZK4P4tSn8DQP
QgpupxXP/3oDhDSzZYbS6Fw+b7U01BwPyziY1kOYztv2qSoBJFMVtZS3oJEu4ChU Jbm7bb5T2P1VAK4qIMP04DXQ861Kr2DvpLA/aPtHd9yMcZn5wQWMCVDgsL3ko0fU
7MRHaN15cGZRsC5zIQAg9w VThQwBW4qe59CCxA68TUcQ
-> ssh-ed25519 /vwQcQ ZPWBCoQ7imVFfTkUYrp4NGRnz3vskNtMgbV41F1s8BE -> ssh-ed25519 /vwQcQ KYM+4CPxNwxwh3liBBJYIqlWzpDO3h/dl54rEKQXGHU
oTrgDNisd8Sqmxo0ZDpVSO5iURWNLrIlKABjys+gHhw uteNJEqwLKUC3Gjm0BiRmb3uLb3bzRfpf3c1Da3vGjY
-> ssh-ed25519 0R97PA CgUUW9m8+M1rpsCPAPyRC8VKvilDKMA8VkDqqDfbpAs -> ssh-ed25519 0R97PA Sc9QAI4UNY6x0fZAoQOpUjzFzwev196x+7fjeIry3AU
qJ/pa3VLh6650lDN5YPyYtxsDYMiRyTtK1yu+JeF3ww puUi8W0jCbMW3cN7PjoDM+vXnHjdQ2RLfX0kdpsaWhI
-> ssh-ed25519 JGx7Ng r8OMU9Grvd8yxzzUzeEH4iCPp8NBHVcQKQe13AJOKjE -> ssh-ed25519 JGx7Ng LzO5qvnVWhF3+cR4J3nJv9IB55/FYKillkJ2jKadfQA
eYC+/VMsoetiVFTGdlAL3xDDe6WziBYU4Fr6XN/HlJI r3F+FKdpoKTB0/e5Vz5JFh9u8BKBOjn9XXE4dJEriuw
-> ssh-ed25519 5SY7Kg 4T4xlrNW8yqI23A3GH7dRDyhbUA62ldS2/R7YCsHz0U -> ssh-ed25519 5SY7Kg Uz/EgMgi0ACJStIvz06efUQpeU6VAuXVj+Veki0LkXA
ukewT84UtQcAQNNSNogi3WOjoNeA7p50D1JHJ+39lYs ukCkNIQMYbZBCBfd5R5dKWJwOcIKHzS9HN9CNk5iSF4
-> ssh-ed25519 p/Mg4Q EBlu4oYIa4hX5mGExy2xwyHbnDli9xY7MebUOr+hTzw -> ssh-ed25519 p/Mg4Q 9+IsF8fUNcQhRxRddI6WQyKP8Ky0HV4jAUvS0ySDDwM
TqmNgHL1xxyI+i4h3KgskVsWrlYUnuT5MJWcYj2crps 7WamT/OA2Os6uE/hKzWkfjlwOKQpZ6j+fcgkvsk6wCY
-> ssh-ed25519 DqHxWQ KiCWC6eJOUScSlPNpC2G2FbfD/fQ2b14KHhuw+QKNTI -> ssh-ed25519 DqHxWQ WndaDm+ApRfFj+KL5cJgJqwaZXUYrXHpQ6AxDtGb5FY
Un89T6OXiXWTBZqwdXPvyckxcBIhp2wmC4A5723b/5g u5RHgWaY28QfA3jsD54PLR50Jl5KQyVpPv4CFhLPiYI
-> ssh-ed25519 tDqJRg k5YZwwURv21NC/0tt2r3CBuUPDhfO/Y7c3ISVhMGQkA -> ssh-ed25519 tDqJRg Wgx7QpoPeendwBsWB+jAN5K+1uhxPsEHMugOPeC+Ono
sdm+SpychoEekD6JK6Wz2CCcfDpwPD6rlLyB3RJES08 CRWVWTQB2eCVSKAwIzNNaWefAmniVtF5hu8xYeTGF0Q
-> ssh-ed25519 9pVK7Q 2kUnZCmNsAu90KA+st/ZFnez8rg4zqIZ3AZQsqHW0y8 -> ssh-ed25519 9pVK7Q kB5gWwwNNcCnjN5+1j7alWzqEgYMDQ3IvA8/0ltfLwo
YlCXQ5g8vnNboPVHdSKyrdwRNvjwp9VHP+RV2WP7z00 Tp7n6v/s4swKjOqEDKEKhM8agghKEvaz+zymG+b72f8
-> ssh-ed25519 /BRpBQ w+kqiukijvXdlvKdTfVvNYv6pLTifaZeagzU1VWQLwE -> ssh-ed25519 /BRpBQ 6B5ODsRsRx8EIOrzBnAAw1bYsAQMvssSC1xxbAh+bGE
RKNPvu971viqMHBXpgE9D8L9ievWxIS5ANU8QADqwRY Xmhe74XTMwfcGvk620XixhR/6GtOt2fynSMdJ7riZxs
-> ssh-ed25519 +MNHsw m+K/VIApzxBfYxc4/dPod+9TwBBTrtGa/B28QhawAD8 -> ssh-ed25519 /x+F2Q /idVQW3v18G3e++zLmmcpZTvSW6YTfYKYX0xalx3DTU
gwJLtE5zIiNtKZ/YdroneSLLuZzvoAXaJYsqPzPkyLc ybNKGMgW5ChQU2HXHfM0Od6GWC+HRKDemibhzi+NCA4
-> ssh-ed25519 rHotTw NSgFCgFQxKc7DSrNq/77PAnAKxSG055gutF2aUUDLzA -> ssh-ed25519 +MNHsw +5EkjYR0CD0tF3jazvyz6WtzIG+84czuEsGzPmucOVI
uL3QhQHmtQrrUPllFtVf7QiLIMWkT0EYIokxUVkLMrc AqBXlugxP84nJ9jK1dPWWRJAAAzZjKl0RKd1+aXeIJg
-> ssh-ed25519 +mFdtQ otE9brZku3sOSb9IvvTW/eioWDFvMJlsxSUvOcPNwiU -> ssh-ed25519 rHotTw IzGcfj5jNooeVt7+iJwnxUfka95NVEtE9dStQUt+gCE
7vV6u7zLv2EfSz3qmY9Sboj2Z5LBwSTxrl4FWm3mYAs +lrjFHAgNOxI4JS6tGXcDSnbdn6/qwt2tI2WdVX2tO4
-> ssh-ed25519 0IVRbA kwQNIVhpFtgIlJAAoqk1fqUP9OHN9YGWcYXbT+/bHE0 -> ssh-ed25519 +mFdtQ AieFjWmv27LvUbZXCBEqmvfTQM7SLXL12qIOzZLxdi8
gDOPJMeDI2eDx+emxUNSb/MW7IRPj8ni3mOLgZV9F0Y s0qzhUO2FDqr/w8B4cbnX8NuXfZM+nv4gj6SF0DreCY
-> ssh-ed25519 IY5FSQ gtGe4X/Vx4oWn0IIUwv6qpWZ250slvT/QMdwVQQrsAQ -> ssh-ed25519 0IVRbA +S10pCaLByp+UrfbZXIIhMvUW79NPSSr5qHbm8Q8nxY
yeJ8+BibBiwq2944ruZdek/4tpAqyMnG0RsyzkXQpRg fLU4Shu/luX9gLrJDM8rY+HRpHuuLKJAz0BSiLfXkj8
--- QhDkZSHLpgsvAUk5YhkhD8MNNX6Vlj7CWeQfJ6oEmk0 -> ssh-ed25519 IY5FSQ FJGXPcN7XjZTl3zc8iLSmc2IfhHx/xqIqnNz7j0dXGg
(binary age payloads not shown)


@@ -1 +1 @@
-{ __arkheon-token_file.publicKeys = (import ../../lib { }).machineKeys; }
+{ __arkheon-token_file.publicKeys = (import ../../keys).machineKeys; }


@ -27,18 +27,6 @@
"url": "https://github.com/RaitoBezarius/arkheon/archive/113724a1a206905e68319676f73d095fcc043a42.tar.gz", "url": "https://github.com/RaitoBezarius/arkheon/archive/113724a1a206905e68319676f73d095fcc043a42.tar.gz",
"hash": "0yh8g020d7z67iqpg7xywk4dxxa64dxa1igd45nb8w653c82w6gq" "hash": "0yh8g020d7z67iqpg7xywk4dxxa64dxa1igd45nb8w653c82w6gq"
}, },
"attic": {
"type": "Git",
"repository": {
"type": "GitHub",
"owner": "zhaofengli",
"repo": "attic"
},
"branch": "main",
"revision": "aec90814a4ecbc40171d57eeef97c5cab4aaa7b4",
"url": "https://github.com/zhaofengli/attic/archive/aec90814a4ecbc40171d57eeef97c5cab4aaa7b4.tar.gz",
"hash": "0dmcy9r9vks4xnfa4y68vjf3fgc4dz1ix4df9rykq3lprr3q4mcx"
},
"cas-eleves": { "cas-eleves": {
"type": "Git", "type": "Git",
"repository": { "repository": {
@ -57,9 +45,9 @@
"url": "https://git.dgnum.eu/DGNum/dgsi.git" "url": "https://git.dgnum.eu/DGNum/dgsi.git"
}, },
"branch": "main", "branch": "main",
"revision": "a88d31541cfd836ba2bd4bb3c8ec8142e4cd8aa2", "revision": "f6fcd90622151e116adedb41f53da0445f1ee387",
"url": null, "url": null,
"hash": "0z31ib1xjdyzpwdnbj4j7r9nb5baiab3nbx0wg55dh2ifkxp2vqb" "hash": "1rrm4j142h2dkphya34hg341xhklrdvqim35jy6g0152a7y1nkk4"
}, },
"disko": { "disko": {
"type": "GitRelease", "type": "GitRelease",
@ -71,10 +59,10 @@
"pre_releases": false, "pre_releases": false,
"version_upper_bound": null, "version_upper_bound": null,
"release_prefix": null, "release_prefix": null,
"version": "v1.7.0", "version": "v1.8.0",
"revision": "e55f9a8678adc02024a4877c2a403e3f6daf24fe", "revision": "624fd86460e482017ed9c3c3c55a3758c06a4e7f",
"url": "https://api.github.com/repos/nix-community/disko/tarball/v1.7.0", "url": "https://api.github.com/repos/nix-community/disko/tarball/v1.8.0",
"hash": "16zjxysjhk3sgd8b4x5mvx9ilnq35z3zfpkv1la33sqkr8xh1amn" "hash": "06ifryv6rw25cz8zda4isczajdgrvcl3aqr145p8njxx5jya2d77"
}, },
"dns.nix": { "dns.nix": {
"type": "GitRelease", "type": "GitRelease",
@ -99,9 +87,9 @@
"repo": "git-hooks.nix" "repo": "git-hooks.nix"
}, },
"branch": "master", "branch": "master",
"revision": "7570de7b9b504cfe92025dd1be797bf546f66528", "revision": "1211305a5b237771e13fcca0c51e60ad47326a9a",
"url": "https://github.com/cachix/git-hooks.nix/archive/7570de7b9b504cfe92025dd1be797bf546f66528.tar.gz", "url": "https://github.com/cachix/git-hooks.nix/archive/1211305a5b237771e13fcca0c51e60ad47326a9a.tar.gz",
"hash": "1snjia7d5x7nqz8j6zgj45fb9kvza86yrhgc8bpjn9b0lc1i88xp" "hash": "1qz8d9g7rhwjk4p2x0rx59alsf0dpjrb6kpzs681gi3rjr685ivq"
}, },
"kadenios": { "kadenios": {
"type": "Git", "type": "Git",
@ -156,9 +144,9 @@
"url": "https://git.lix.systems/lix-project/lix.git" "url": "https://git.lix.systems/lix-project/lix.git"
}, },
"branch": "main", "branch": "main",
"revision": "cc183fdbc14ce105a5661d646983f791978b9d5c", "revision": "ed9b7f4f84fd60ad8618645cc1bae2d686ff0db6",
"url": null, "url": null,
"hash": "1bgh8z445yhv0b46yimr2ic33hplm33xj50ivgsbykdf30xks95n" "hash": "05kxga8fs9h4qm0yvp5l7jvsda7hzqs7rvxcn8r52dqg3c80hva9"
}, },
"lix-module": { "lix-module": {
"type": "Git", "type": "Git",
@ -167,9 +155,9 @@
"url": "https://git.lix.systems/lix-project/nixos-module.git" "url": "https://git.lix.systems/lix-project/nixos-module.git"
}, },
"branch": "main", "branch": "main",
"revision": "353b25f0b6da5ede15206d416345a2ec4195b5c8", "revision": "fd186f535a4ac7ae35d98c1dd5d79f0a81b7976d",
"url": null, "url": null,
"hash": "0aq9l1qhz01wm232gskq2mywik98zv2r8qn42bjw3kdb185wf9kl" "hash": "0jxpqaz12lqibg03iv36sa0shfvamn2yhg937llv3kl4csijd34f"
}, },
"lon": { "lon": {
"type": "Git", "type": "Git",
@ -194,19 +182,17 @@
"url": null, "url": null,
"hash": "0m9il1lllw59a6l9vwfi1bika7g4pxs20clc48kklpflnk0scb1f" "hash": "0m9il1lllw59a6l9vwfi1bika7g4pxs20clc48kklpflnk0scb1f"
}, },
"nix-lib": { "microvm.nix": {
"type": "GitRelease", "type": "Git",
"repository": { "repository": {
"type": "Git", "type": "GitHub",
"url": "https://git.hubrecht.ovh/hubrecht/nix-lib" "owner": "RaitoBezarius",
"repo": "microvm.nix"
}, },
"pre_releases": false, "branch": "main",
"version_upper_bound": null, "revision": "49899c9a4fdf75320785e79709bf1608c34caeb8",
"release_prefix": null, "url": "https://github.com/RaitoBezarius/microvm.nix/archive/49899c9a4fdf75320785e79709bf1608c34caeb8.tar.gz",
"version": "0.1.6", "hash": "0sz6azdpiz4bd36x23bcdhx6mwyqj8zl5cczjgv48xqfmysy8zwy"
"revision": "ffb3dfa4c146d48300bd4fa625acfe48e091a734",
"url": null,
"hash": "1frsja071qqx6p7rjnijzhidqfylx0ipzqpmjdvj4jl89h34vrhr"
}, },
"nix-modules": { "nix-modules": {
"type": "Git", "type": "Git",
@ -215,9 +201,9 @@
"url": "https://git.hubrecht.ovh/hubrecht/nix-modules.git" "url": "https://git.hubrecht.ovh/hubrecht/nix-modules.git"
}, },
"branch": "main", "branch": "main",
"revision": "32e76ee64352587663766e1a3945a6fe0917e35d", "revision": "2fd7c7810b2a901020ddd2d0cc82810b83a313fc",
"url": null, "url": null,
"hash": "16vnpnby6s174y4nzb26z2pc49ba7lw7vpf6r7p4dqci92b0yg5j" "hash": "0rag870ll745r5isnk6hlxv0b0sbgriba5k6nihahcwsal2f4830"
}, },
"nix-patches": { "nix-patches": {
"type": "GitRelease", "type": "GitRelease",
@ -240,9 +226,9 @@
"url": "https://git.hubrecht.ovh/hubrecht/nix-pkgs" "url": "https://git.hubrecht.ovh/hubrecht/nix-pkgs"
}, },
"branch": "main", "branch": "main",
"revision": "f3a79c8038b8847a0c93381db2b744b3153a0201", "revision": "3e731378f3984313ef902c5e5a49e002e6e2c27e",
"url": null, "url": null,
"hash": "1l7xd5s7ycwnnmb3kn12ysc4kqnvg1p4g60sfndqc8q944wxmpab" "hash": "1vy2dj9fyy653w6idvi1r73s0nd2a332a1xkppddjip6rk0i030p"
}, },
"nixos-23.11": { "nixos-23.11": {
"type": "Channel", "type": "Channel",
@ -253,8 +239,8 @@
"nixos-24.05": { "nixos-24.05": {
"type": "Channel", "type": "Channel",
"name": "nixos-24.05", "name": "nixos-24.05",
"url": "https://releases.nixos.org/nixos/24.05/nixos-24.05.4798.f4c846aee8e1/nixexprs.tar.xz", "url": "https://releases.nixos.org/nixos/24.05/nixos-24.05.5518.ecbc1ca8ffd6/nixexprs.tar.xz",
"hash": "0i08jxfa55ifpdmcwg2isgszprxaikjalinmcqjfzk336hzvh7if" "hash": "1yr2v17d8jg9567rvadv62bpr6i47fp73by2454yjxh1m9ric2cm"
}, },
"nixos-generators": { "nixos-generators": {
"type": "Git", "type": "Git",
@ -264,21 +250,33 @@
"repo": "nixos-generators" "repo": "nixos-generators"
}, },
"branch": "master", "branch": "master",
"revision": "214efbd73241d72a8f48b8b9a73bb54895cd51a7", "revision": "9ae128172f823956e54947fe471bc6dfa670ecb4",
"url": "https://github.com/nix-community/nixos-generators/archive/214efbd73241d72a8f48b8b9a73bb54895cd51a7.tar.gz", "url": "https://github.com/nix-community/nixos-generators/archive/9ae128172f823956e54947fe471bc6dfa670ecb4.tar.gz",
"hash": "00cavr7wlaa6mc16245gn5d5bq7y67fg7l4bgkx3q5109jay1837" "hash": "1zn3lykymimzh21q4fixw6ql42n8j82dqwm5axifhcnl8dsdgrvr"
}, },
"nixos-unstable": { "nixos-unstable": {
"type": "Channel", "type": "Channel",
"name": "nixos-unstable", "name": "nixos-unstable",
"url": "https://releases.nixos.org/nixos/unstable/nixos-24.11pre677397.574d1eac1c20/nixexprs.tar.xz", "url": "https://releases.nixos.org/nixos/unstable/nixos-24.11pre688563.bc947f541ae5/nixexprs.tar.xz",
"hash": "0j66kv4xq4csa5hwizlab5a7j47hd44182xvz541ll3cdfd5a7gx" "hash": "1jsaxwi128fiach3dj8rdj5agqivsr4sidb8lmdnl7g07fl9x0kj"
}, },
"nixpkgs": { "nixpkgs": {
"type": "Channel", "type": "Channel",
"name": "nixpkgs-unstable", "name": "nixpkgs-unstable",
"url": "https://releases.nixos.org/nixpkgs/nixpkgs-24.11pre678893.5775c2583f18/nixexprs.tar.xz", "url": "https://releases.nixos.org/nixpkgs/nixpkgs-24.11pre689466.7d49afd36b55/nixexprs.tar.xz",
"hash": "09r3fc2xk4nxzhmkn7wvk99i8qibrhh6lhd3mz6iz64imj1k5r9r" "hash": "0r4zb6j8in4dk7gxciapfm49dqbdd0c7ajjzj9iy2xrrj5aj32qp"
},
"proxmox-nixos": {
"type": "Git",
"repository": {
"type": "GitHub",
"owner": "SaumonNet",
"repo": "proxmox-nixos"
},
"branch": "main",
"revision": "7869ffc2e0db36f314fb60f1ab0087b760700b00",
"url": "https://github.com/SaumonNet/proxmox-nixos/archive/7869ffc2e0db36f314fb60f1ab0087b760700b00.tar.gz",
"hash": "0cam36s3ar366y41rvihjqghkdjl9a1n1wzym8p2mkar1r9x7haj"
}, },
"signal-irc-bridge": { "signal-irc-bridge": {
"type": "Git", "type": "Git",
@ -287,9 +285,9 @@
"url": "https://git.dgnum.eu/mdebray/signal-irc-bridge" "url": "https://git.dgnum.eu/mdebray/signal-irc-bridge"
}, },
"branch": "master", "branch": "master",
"revision": "688a5c324e032f7716aa69fb7097971fa26bed1d", "revision": "9123e6fbe5cdc2d2ae16579d989d45398232f74c",
"url": null, "url": null,
"hash": "153mb2m3ap3v3y1inygqic551vawz1i08pbx2v1viaind3nd2l6m" "hash": "15p61k0ylri7bbqz4vsy8rmhy62va4yd8cjiwm4lb0gvgbcbkdr2"
}, },
"stateless-uptime-kuma": { "stateless-uptime-kuma": {
"type": "Git", "type": "Git",
@ -310,9 +308,9 @@
"server": "https://git.helsinki.tools/" "server": "https://git.helsinki.tools/"
}, },
"branch": "master", "branch": "master",
"revision": "a1c485d16f0df1f55634787b63961846288b3d31", "revision": "4c47608f349dd45e4895e1f61f19ad9e8dfcc0bf",
"url": "https://git.helsinki.tools/api/v4/projects/helsinki-systems%2Fwp4nix/repository/archive.tar.gz?sha=a1c485d16f0df1f55634787b63961846288b3d31", "url": "https://git.helsinki.tools/api/v4/projects/helsinki-systems%2Fwp4nix/repository/archive.tar.gz?sha=4c47608f349dd45e4895e1f61f19ad9e8dfcc0bf",
"hash": "09xmhv821x2w704lbg43ayr83ycb0rvqfh6fq0c9l4x9v23wv9cw" "hash": "1pnjhbljihf2ras9lbp1f6izzxghccfygkkf2ikkahjr1vbicdbq"
} }
}, },
"version": 3 "version": 3


@ -0,0 +1,54 @@
From 4d6e57d2d577cc105c9e0cd397408e9e3ce85cd0 Mon Sep 17 00:00:00 2001
From: Raito Bezarius <masterancpp@gmail.com>
Date: Tue, 8 Oct 2024 16:33:14 +0200
Subject: [PATCH] fix(packaging): correctness of the build top directory
It was using /build which is an implementation detail and not
guaranteed.
Signed-off-by: Raito Bezarius <masterancpp@gmail.com>
---
pkgs/pve-container/default.nix | 6 +++---
pkgs/pve-rs/default.nix | 2 +-
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/pkgs/pve-container/default.nix b/pkgs/pve-container/default.nix
index 445c271..5633c0f 100644
--- a/pkgs/pve-container/default.nix
+++ b/pkgs/pve-container/default.nix
@@ -30,7 +30,7 @@ perl536.pkgs.toPerlModule (
postPatch = ''
sed -i Makefile \
-e "s/pct.1 pct.conf.5 pct.bash-completion pct.zsh-completion //" \
- -e "s,/usr/share/lxc,/build/lxc," \
+ -e "s,/usr/share/lxc,$NIX_BUILD_TOP/lxc," \
-e "/pve-doc-generator/d" \
-e "/PVE_GENERATING_DOCS/d" \
-e "/SERVICEDIR/d" \
@@ -45,8 +45,8 @@ perl536.pkgs.toPerlModule (
dontPatchShebangs = true;
postConfigure = ''
- cp -r ${lxc}/share/lxc /build
- chmod -R +w /build/lxc
+ cp -r ${lxc}/share/lxc $NIX_BUILD_TOP/
+ chmod -R +w $NIX_BUILD_TOP/lxc
'';
makeFlags = [
diff --git a/pkgs/pve-rs/default.nix b/pkgs/pve-rs/default.nix
index c024287..881beab 100644
--- a/pkgs/pve-rs/default.nix
+++ b/pkgs/pve-rs/default.nix
@@ -57,7 +57,7 @@ perl536.pkgs.toPerlModule (
];
makeFlags = [
- "BUILDIR=/build"
+ "BUILDIR=$NIX_BUILD_TOP"
"BUILD_MODE=release"
"DESTDIR=$(out)"
"GITVERSION:=${src.rev}"
--
2.46.0


@@ -126,4 +126,11 @@ in
       hash = "sha256-SgHhW9HCkDQsxT3eG4P9q68c43e3sbDHRY9qs7oSt8o=";
     }
   ];
+
+  "proxmox-nixos" = [
+    {
+      _type = "static";
+      path = ./05-pmnos-correctness-build-directory.patch;
+    }
+  ];
 }

scripts/cache-node.sh Normal file (20 lines)

@ -0,0 +1,20 @@
set -eu -o pipefail
cat <<EOF >.netrc
default
login $STORE_USER
password $STORE_PASSWORD
EOF
drv=$("@colmena@/bin/colmena" eval --instantiate -E "{ nodes, ... }: nodes.${BUILD_NODE}.config.system.build.toplevel")
# Build the derivation and send it to the great beyond
nix-store --query --requisites --force-realise --include-outputs "$drv" | grep -v '.*\.drv' >paths.txt
nix copy \
--extra-experimental-features nix-command \
--to "$STORE_ENDPOINT?compression=none" \
--netrc-file .netrc \
"$(nix-store --realise "$drv")"
rm .netrc
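The script takes everything through environment variables plus a @colmena@ placeholder substituted by the dev-shell wrapper. A hedged invocation sketch for a single node (endpoint and credentials are placeholders):

export STORE_ENDPOINT="https://tvix-store.dgnum.eu/infra"
export STORE_USER="ci" STORE_PASSWORD="..."
export BUILD_NODE="krz01"
cache-node   # via the dev-shell wrapper; running the raw file would leave @colmena@ unsubstituted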


@ -1,12 +0,0 @@
ENDPOINT=${ATTIC_ENDPOINT:-https://cachix.dgnum.eu}
if [ "$1" == "off" ]; then
echo "Please edit $XDG_CONFIG_HOME/nix/nix.conf to remove the cache"
elif [ "$1" == "on" ]; then
@attic@/bin/attic login dgnum "$ENDPOINT"
@attic@/bin/attic use dgnum:infra
else
echo "Help:"
echo " cache {on|off}"
fi


@@ -10,7 +10,6 @@ let
       git
       jq
       ;
-    attic = pkgs.attic-client;
   };

   mkShellScript =
@@ -30,12 +29,10 @@ let
     ));

   scripts = [
+    "cache-node"
     "check-deployment"
     "launch-vm"
     "list-nodes"
-    "push-to-cache"
-    "push-to-nix-cache"
-    "cache"
   ];
 in


@ -1,13 +0,0 @@
set -e
set -u
set -o pipefail
ENDPOINT=${ATTIC_ENDPOINT:-https://cachix.dgnum.eu}
@attic@/bin/attic login dgnum "$ENDPOINT" "$ATTIC_TOKEN"
@colmena@/bin/colmena eval -E '{ nodes, lib, ... }: lib.mapAttrsToList (_: v: v.config.system.build.toplevel.drvPath) nodes' |\
@jq@/bin/jq -r '.[]' |\
xargs -n 10 nix-store -q -R --include-outputs |\
sed '/\.drv$/d' |\
xargs @attic@/bin/attic push dgnum:infra


@ -1,20 +0,0 @@
set -e
set -u
set -o pipefail
ENDPOINT=${STORE_ENDPOINT:-https://tvix-cache.dgnum.eu/infra-singing/}
cat > .netrc << EOF
default
login $STORE_USER
password $STORE_PASSWORD
EOF
@colmena@/bin/colmena eval -E "{ nodes, lib, ... }: builtins.map (v: nodes.\${v}.config.system.build.toplevel.drvPath) ${NODES:-(builtins.attrNames nodes)}" |\
@jq@/bin/jq -r '.[]' |\
xargs nix-store -q -R --include-outputs |\
sed '/\.drv$/d' |\
tee uploaded.txt |\
xargs nix copy --to "$ENDPOINT?compression=none" --extra-experimental-features nix-command --netrc-file ./.netrc
rm .netrc