Metadata of the DGNum infrastructure
====================================
# DNS

The DNS configuration of our infrastructure is entirely defined by the metadata contained in this folder.

The different machines have records pointing to their IP addresses, when they exist:

- `$node.$site.infra.dgnum.eu` IN A `$ipv4`
- `$node.$site.infra.dgnum.eu` IN AAAA `$ipv6`
- `v4.$node.$site.infra.dgnum.eu` IN A `$ipv4`
- `v6.$node.$site.infra.dgnum.eu` IN AAAA `$ipv6`
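
As an illustration, for a hypothetical node `web01` at a hypothetical site `pav01`, with documentation IP addresses (RFC 5737 / RFC 3849), the generated zone entries would look like:

```dns
; hypothetical node and site names, documentation IP addresses
web01.pav01.infra.dgnum.eu.    IN A    192.0.2.10
web01.pav01.infra.dgnum.eu.    IN AAAA 2001:db8::10
v4.web01.pav01.infra.dgnum.eu. IN A    192.0.2.10
v6.web01.pav01.infra.dgnum.eu. IN AAAA 2001:db8::10
```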

Then the services hosted on those machines can be accessed through redirections:

- `$service.dgnum.eu` IN CNAME `$node.$site.infra.dgnum.eu`

or, when targeting only a specific IP protocol:

- `$service4.dgnum.eu` IN CNAME `v4.$node.$site.infra.dgnum.eu`
- `$service6.dgnum.eu` IN CNAME `v6.$node.$site.infra.dgnum.eu`
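
As an illustration with hypothetical service and node names, using the `v4.`/`v6.` records defined above:

```dns
; hypothetical names, for illustration only
pads.dgnum.eu.  IN CNAME web01.pav01.infra.dgnum.eu.
pads4.dgnum.eu. IN CNAME v4.web01.pav01.infra.dgnum.eu.
pads6.dgnum.eu. IN CNAME v6.web01.pav01.infra.dgnum.eu.
```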

Extra records exist for the name servers, the mail configuration, and the main website; they should not change and should not be tinkered with.

# Network

The network configuration (except for the NetBird VPN) is defined statically.

TODO.

# Nixpkgs

Machines can use different versions of NixOS; the supported and default ones are specified here.

# Nodes

The nodes are declared statically; several options can be configured:

- `deployment`, the Colmena deployment option
- `stateVersion`, the state version of the node
- `nixpkgs`, the version of NixOS to use
- `admins`, the list of administrators specific to this node; they will be given root access
- `adminGroups`, a list of groups whose members will be added to `admins`
- `site`, the physical location of the node
- `vm-cluster`, the VM cluster hosting the node, when appropriate
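
A node declaration using these options might look like the following sketch; the hostname, site name, and concrete values are hypothetical:

```nix
# Hypothetical node declaration; names and values are illustrative only.
web01 = {
  site = "pav01";              # physical location of the node
  stateVersion = "24.05";      # state version of the node
  nixpkgs = "24.05";           # version of NixOS to use
  admins = [ "alice" ];        # node-specific administrators, given root access
  adminGroups = [ "webteam" ]; # members of these groups are added to admins
  deployment = {
    # deployment.targetHost is inferred from the network configuration,
    # and deployment.tags automatically includes "infra-pav01"
    tags = [ "web" ];
  };
};
```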

Some options are set automatically, for example:

- `deployment.targetHost` will be inferred from the network configuration
- `deployment.tags` will contain `infra-$site`, so that a full site can be redeployed at once

# Organization

The organization defines the groups and members of the infrastructure team;
one day, this information will be synchronized with Kanidm.

## Members

For a member to be allowed access to a node, they must be defined in the `members` attribute set,
and their SSH keys must be available in the keys folder.

## Groups

Groups exist only to simplify the management of access:

- The `root` group will be given administrator access on all nodes
- The `iso` group will have its keys included in the ISOs built from the `iso` folder

Extra groups can be created at will, to be used in node-specific modules.

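As a sketch, members and groups might be declared along these lines; the attribute names and values below are hypothetical, and the exact schema is defined by the meta module:

```nix
# Hypothetical sketch of the organization; the real schema may differ.
{
  members = {
    alice = { }; # SSH keys are expected in the keys folder (layout hypothetical)
  };

  groups = {
    root = [ "alice" ]; # administrator access on all nodes
    iso  = [ "alice" ]; # keys included in the ISOs built from the iso folder
  };
}
```
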
# Module

The meta configuration can be evaluated as a module, to perform checks on the structure of the data.
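
A minimal sketch of such an evaluation, assuming a hypothetical `module.nix` declaring the expected structure and a hypothetical `meta.nix` holding the actual data:

```nix
# Hypothetical evaluation sketch; file names are illustrative.
let
  lib = (import <nixpkgs> { }).lib;
  eval = lib.evalModules {
    modules = [
      ./module.nix # declares the options (nodes, organization, ...)
      ./meta.nix   # the metadata itself, as module configuration
    ];
  };
in
# Forcing eval.config triggers the type checks declared in module.nix
eval.config
```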