# Metadata of the DGNum infrastructure
## DNS
The DNS configuration of our infrastructure is entirely derived from the metadata contained in this folder.

Each machine has records pointing to its IP addresses, when they exist:
- `$node.$site.infra.dgnum.eu IN A $ipv4`
- `$node.$site.infra.dgnum.eu IN AAAA $ipv6`
- `v4.$node.$site.infra.dgnum.eu IN A $ipv4`
- `v6.$node.$site.infra.dgnum.eu IN AAAA $ipv6`
The services hosted on those machines can then be accessed through redirections:

- `$service.dgnum.eu IN CNAME $node.$site.infra.dgnum.eu`

or, when targeting only a specific IP protocol:

- `$service4.dgnum.eu IN CNAME v4.$node.$site.infra.dgnum.eu`
- `$service6.dgnum.eu IN CNAME v6.$node.$site.infra.dgnum.eu`
Extra records exist for name servers, mail configuration, and the main website; they should not be changed or tinkered with.
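As a purely illustrative sketch (the node name, site, service, and addresses below are invented, not taken from the real zone), the records generated for a single node might look like:

```dns
; Hypothetical example -- names and addresses are made up.
web01.pav01.infra.dgnum.eu.     IN A      192.0.2.10
web01.pav01.infra.dgnum.eu.     IN AAAA   2001:db8::10
v4.web01.pav01.infra.dgnum.eu.  IN A      192.0.2.10
v6.web01.pav01.infra.dgnum.eu.  IN AAAA   2001:db8::10

; A service redirection, plus its protocol-specific variants.
git.dgnum.eu.                   IN CNAME  web01.pav01.infra.dgnum.eu.
git4.dgnum.eu.                  IN CNAME  v4.web01.pav01.infra.dgnum.eu.
git6.dgnum.eu.                  IN CNAME  v6.web01.pav01.infra.dgnum.eu.
```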
## Network
The network configuration (except for the NetBird VPN) is defined statically.
TODO.
## Nixpkgs

Machines can use different versions of NixOS; the supported ones are specified here.

### How to add a new version
- Switch to a new branch `nixos-$VERSION`
- Run the following command: `npins add channel nixos-$VERSION`
- Edit `meta/nixpkgs.nix` and add `$VERSION` to the supported versions
- Read the release notes and check for changes
- Update the nodes' versions
- Create a PR so that the CI checks that everything builds
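The exact schema of `meta/nixpkgs.nix` is not shown in this document; as a minimal sketch (the attribute names and version numbers here are assumptions, not the real file), adding a version could look like:

```nix
# Hypothetical sketch of meta/nixpkgs.nix -- the real attribute
# names may differ.
{
  # Supported NixOS versions; append the new $VERSION here.
  supported = [
    "24.05"
    "24.11" # newly added version
  ];
}
```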
## Nodes

The nodes are declared statically; several options can be configured:

- `deployment`, the colmena deployment options
- `stateVersion`, the state version of the node
- `nixpkgs`, the version and system of NixOS to use
- `admins`, the list of administrators specific to this node; they will be given root access
- `adminGroups`, a list of groups whose members will be added to `admins`
- `site`, the physical location of the node
- `vm-cluster`, the VM cluster hosting the node, when appropriate
Some options are set automatically, for example:

- `deployment.targetHost` will be inferred from the network configuration
- `deployment.tags` will contain `infra-$site`, so that a full site can be redeployed at once
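Putting the options above together, a node declaration could be sketched as follows (every name and value here is invented for illustration; the real schema may differ in detail):

```nix
# Hypothetical node declaration -- names and values are made up.
{
  web01 = {
    site = "pav01";                # physical location of the node
    stateVersion = "24.05";        # NixOS state version
    nixpkgs = "24.05";             # NixOS version to use
    admins = [ "jdoe" ];           # node-specific administrators (root access)
    adminGroups = [ "web" ];       # members of these groups are added to admins
    vm-cluster = "hypervisor01";   # set when the node is a VM
    deployment = { };              # extra colmena deployment options;
                                   # targetHost and tags are inferred
  };
}
```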
## Organization

The organization defines the groups and members of the infrastructure team; one day, this information will be synchronized with Kanidm.
### Members

For a member to be allowed access to a node, they must be defined in the `members` attribute set, and their SSH keys must be available in the keys folder.
### Groups

Groups exist only to simplify access management:

- The `root` group will be given administrator access on all nodes
- The `iso` group will have its keys included in the ISOs built from the iso folder
Extra groups can be created at will, to be used in node-specific modules.
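A hypothetical organization sketch, tying members and groups together (the member name and exact schema are assumptions for illustration only):

```nix
# Hypothetical sketch -- names are invented and the schema is assumed.
{
  members = {
    jdoe = { }; # SSH keys must also be present in the keys folder
  };

  groups = {
    root = [ "jdoe" ]; # administrator access on all nodes
    iso  = [ "jdoe" ]; # keys included in ISOs built from the iso folder
    web  = [ "jdoe" ]; # extra group, usable in node-specific modules
  };
}
```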
## Module

The meta configuration can be evaluated as a module, in order to perform checks on the structure of the data.
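Such an evaluation could be sketched with the NixOS module system's `lib.evalModules`; this is a minimal, assumed wiring (the actual entry point, e.g. `verify.nix`, may differ):

```nix
# Minimal sketch, assuming options.nix declares the option types
# and default.nix returns the metadata attribute set.
let
  pkgs = import <nixpkgs> { };
  eval = pkgs.lib.evalModules {
    modules = [
      ./options.nix        # option declarations for the metadata
      (import ./default.nix) # the metadata itself, checked against the options
    ];
  };
in
  # Evaluation fails if the data does not match the declared options.
  eval.config
```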