❄️ infrastructure
The dgnum infrastructure.
Contributing
Some instructions on how to contribute are available (in French) in /CONTRIBUTE.md. You are expected to read this document before committing to the repo.
Some documentation for the development tools is provided in the aforementioned file.
Using the binary cache
Add the following module to your configuration (and pin this repo using your favorite tool: npins, lon, etc.):
{ lib, ... }:
let
dgnum-infra = PINNED_PATH_TO_INFRA;
in {
nix.settings = (import dgnum-infra { }).mkCacheSettings {
caches = [ "infra" ];
};
}
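For example, if the repository was pinned with npins, the placeholder can be filled from the pin set. This is only a sketch: the attribute name used below is hypothetical and must match whatever name you gave the pin.
{ lib, ... }:
let
  # Assumes npins was used to pin this repository; "dgnum-infrastructure"
  # is a placeholder for your actual pin name.
  sources = import ./npins;
  dgnum-infra = sources.dgnum-infrastructure;
in {
  nix.settings = (import dgnum-infra { }).mkCacheSettings {
    caches = [ "infra" ];
  };
}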
Adding a new machine
The first step is to create a minimal viable NixOS host, using the means necessary. The second step is to find a name for this host; it must be unique among the existing hosts.
Tip
For the rest of this part, we assume that the host is named host02.
Download the keys
The public SSH keys of host02 have to be saved to keys, preferably only the ssh-ed25519 one. The key can be retrieved with:
ssh-keyscan address.of.host02 2>/dev/null | awk '/ssh-ed25519/ {print $2,$3}'
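If you want to write the key straight into the repository, a minimal sketch could be the following; the target file name is an assumption and should follow the layout already used in keys:
# The file name keys/host02 is hypothetical; mirror the existing entries in keys.
ssh-keyscan address.of.host02 2>/dev/null \
  | awk '/ssh-ed25519/ {print $2,$3}' > keys/host02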
Initialize the machine folder and configuration
- Create a folder host02 under machines/
- Copy the hardware configuration file generated by nixos-generate-config to machines/host02/_hardware-configuration.nix
- Create a machines/host02/_configuration.nix file; it will contain the main configuration options. The basic content of this file should be the following:
{ lib, ... }:
lib.extra.mkConfig {
enabledModules = [
# List of modules to enable
];
enabledServices = [
# List of services to enable
];
extraConfig = {
services.netbird.enable = true;
};
root = ./.;
}
Fill in the metadata
Network configuration
The network is declared in meta/network.nix; the necessary hostId value can be generated with:
head -c4 /dev/urandom | od -A none -t x4 | sed 's/ //'
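The exact schema is defined by the existing entries in meta/network.nix; as a rough, hypothetical sketch, a new entry pairs the generated hostId with the host's addressing information:
# Hypothetical shape: copy an existing entry in meta/network.nix and adapt it.
{
  host02 = {
    hostId = "d4e6c2af"; # value produced by the command above
    # interfaces, addresses, etc. follow the schema used by the other hosts
  };
}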
Other details
The general metadata is declared in meta/nodes.nix; the main values to declare are:
- site, where the node is physically located
- stateVersion
- nixpkgs, the nixpkgs version to use
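As with the network file, follow the existing entries; a hypothetical entry could look like the sketch below, where every value is a placeholder:
# Hypothetical entry: the actual attribute set must match meta/nodes.nix.
{
  host02 = {
    site = "some-site";      # where the node is physically located
    stateVersion = "24.05";  # placeholder state version
    nixpkgs = "unstable";    # placeholder nixpkgs version to use
  };
}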
Initialize secrets
Create the directory secrets in the configuration folder, and add a secrets.nix file containing:
(import ../../../keys.nix).mkSecrets [ "host02" ] [
# List of secrets for host02
]
This will be used for future secret management.
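Once a secret name is added to that list, the corresponding encrypted file can be created or edited with agenix from the secrets directory; the name below is purely illustrative and must match an entry declared in secrets.nix:
# "some-secret.age" is a hypothetical name; use the one declared in secrets.nix.
agenix -e some-secret.age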
Update encrypted files
The Arkheon, Netbox and notification modules all have secrets that are deployed on every machine. To make those services work correctly, run the following in modules/dgn-records, modules/dgn-netbox-agent and modules/dgn-notify:
agenix -r
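A minimal sketch, run from the repository root and assuming agenix is available in the development shell:
# Re-key the secrets of each module in turn.
for d in modules/dgn-records modules/dgn-netbox-agent modules/dgn-notify; do
  (cd "$d" && agenix -r)
done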
Commit and create a PR
Once all of this is done, check that the configuration builds correctly:
colmena build --on host02
Apply it, and create a Pull Request.
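Deployment is also done with colmena; a minimal sketch, assuming the host is reachable over SSH and you have deployment rights:
# Build first, then push and activate the configuration on the new host.
colmena build --on host02
colmena apply --on host02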