Metadata of the DGNum infrastructure

DNS

The DNS configuration of our infrastructure is completely defined with the metadata contained in this folder.

Each machine has records pointing to its IP addresses, when they exist:

  • $node.$site.infra.dgnum.eu IN A $ipv4

  • $node.$site.infra.dgnum.eu IN AAAA $ipv6

  • v4.$node.$site.infra.dgnum.eu IN A $ipv4

  • v6.$node.$site.infra.dgnum.eu IN AAAA $ipv6

The services hosted on those machines can then be reached through CNAME redirections:

  • $service.dgnum.eu IN CNAME $node.$site.infra.dgnum.eu

or, when targeting only a specific IP protocol:

  • $service4.dgnum.eu IN CNAME ipv4.$node.$site.infra.dgnum.eu
  • $service6.dgnum.eu IN CNAME ipv6.$node.$site.infra.dgnum.eu
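
For illustration, here is what the generated records could look like for the node web01 at a hypothetical site site01, hosting a hypothetical service git (the IP addresses below are documentation addresses, not real ones):

  web01.site01.infra.dgnum.eu.    IN A     192.0.2.10
  web01.site01.infra.dgnum.eu.    IN AAAA  2001:db8::10
  v4.web01.site01.infra.dgnum.eu. IN A     192.0.2.10
  v6.web01.site01.infra.dgnum.eu. IN AAAA  2001:db8::10
  git.dgnum.eu.                   IN CNAME web01.site01.infra.dgnum.eu.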

Extra records exist for the name servers, the mail configuration, and the main website; they shouldn't change and shouldn't be tinkered with.

Network

The network configuration (except for the NetBird VPN) is defined statically.

TODO.

Nixpkgs

Machines can use different versions of NixOS; the supported ones are specified here.

How to add a new version

  • Switch to a new branch nixos-$VERSION
  • Run the following command:
npins add channel nixos-$VERSION
  • Edit meta/nixpkgs.nix and add $VERSION to the list of supported versions (see the sketch after this list).
  • Read the release notes and check for relevant changes.
  • Update the nodes' versions.
  • Create a PR so that the CI checks that everything builds.
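
As a rough sketch, the edit to meta/nixpkgs.nix could look like the following; the attribute name supported is hypothetical and the actual file layout may differ:

  # meta/nixpkgs.nix -- hypothetical sketch, the real structure may differ.
  {
    # NixOS versions that nodes are allowed to use.
    supported = [
      "24.05"
      "24.11" # $VERSION, newly added by this step
    ];
  }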

Nodes

The nodes are declared statically; several options can be configured:

  • deployment, the Colmena deployment options
  • stateVersion, the state version of the node
  • nixpkgs, the version and system of NixOS to use
  • admins, the list of administrators specific to this node; they will be given root access
  • adminGroups, a list of groups whose members will be added to admins
  • site, the physical location of the node
  • vm-cluster, the VM cluster hosting the node, when applicable

Some options are set automatically, for example:

  • deployment.targetHost will be inferred from the network configuration
  • deployment.tags will contain infra-$site, so that a full site can be redeployed at once
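
Putting it together, a node declaration could look like the following minimal sketch; the attribute names follow the options listed above, but the concrete values (site01, alice, cluster01) are hypothetical and the real schema may differ:

  # nodes/web01.nix -- minimal hypothetical sketch.
  {
    site = "site01";          # hypothetical site name
    stateVersion = "24.05";
    nixpkgs = {
      version = "24.05";
      system = "x86_64-linux";
    };
    admins = [ "alice" ];     # hypothetical member
    adminGroups = [ "web" ];  # hypothetical group
    vm-cluster = "cluster01"; # hypothetical, only when the node is a VM
    # deployment.targetHost is inferred from the network configuration;
    # deployment.tags automatically contains "infra-site01".
  }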

Organization

The organization defines the groups and members of the infrastructure team; one day this information will be synchronized with Kanidm.

Members

For a member to be allowed access to a node, they must be defined in the members attribute set, and their SSH keys must be available in the keys folder.

Groups

Groups exist only to simplify access management:

  • The root group will be given administrator access on all nodes
  • The iso group will have its keys included in the ISOs built from the iso folder

Extra groups can be created at will, to be used in node-specific modules.
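
As an illustration, the organization data could be shaped as follows; the member alice and the extra group web are hypothetical, and the real schema may differ:

  # organization/default.nix -- hypothetical sketch.
  {
    members = {
      # Each member must also have SSH keys in the keys folder.
      alice = { };
    };
    groups = {
      root = [ "alice" ]; # administrator access on all nodes
      iso = [ "alice" ];  # keys included in the ISOs built from the iso folder
      web = [ "alice" ];  # extra group, used in node-specific modules
    };
  }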

Module

The meta configuration can be evaluated as a module, to perform checks on the structure of the data.
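
A minimal sketch of such an evaluation, assuming options.nix declares the expected structure of the metadata (the exact entry point used by verify.nix may differ):

  # Hypothetical sketch: type-check the metadata with the NixOS module system.
  let
    pkgs = import <nixpkgs> { };
  in
  (pkgs.lib.evalModules {
    modules = [
      ./options.nix # declares the expected structure of the metadata
      ./default.nix # the metadata itself
    ];
  }).config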