forked from DGNum/infrastructure
Metadata of the DGNum infrastructure
====================================

# DNS

The DNS configuration of our infrastructure is entirely defined by the metadata contained in this folder.

Each machine has records pointing to its IP addresses, when they exist:

- $node.$site.infra.dgnum.eu IN A $ipv4
- $node.$site.infra.dgnum.eu IN AAAA $ipv6
- v4.$node.$site.infra.dgnum.eu IN A $ipv4
- v6.$node.$site.infra.dgnum.eu IN AAAA $ipv6

The services hosted on those machines can then be accessed through redirections:

- $service.dgnum.eu IN CNAME $node.$site.infra.dgnum.eu

or, when targeting a specific IP protocol only:

- $service4.dgnum.eu IN CNAME v4.$node.$site.infra.dgnum.eu
- $service6.dgnum.eu IN CNAME v6.$node.$site.infra.dgnum.eu

Extra records exist for the name servers, the mail configuration, and the main website; they shouldn't change or be tinkered with.

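To illustrate the naming scheme, here is a hypothetical zone fragment for a node `web01` at a site `pav` hosting a service `cloud` — the names and addresses are made up for illustration (documentation IP ranges), not taken from the actual metadata:

```dns
; Hypothetical records for node "web01" at site "pav" (illustrative only)
web01.pav.infra.dgnum.eu.      IN A     203.0.113.10
web01.pav.infra.dgnum.eu.      IN AAAA  2001:db8::10
v4.web01.pav.infra.dgnum.eu.   IN A     203.0.113.10
v6.web01.pav.infra.dgnum.eu.   IN AAAA  2001:db8::10

; A service hosted on that node, reachable over both protocols
cloud.dgnum.eu.                IN CNAME web01.pav.infra.dgnum.eu.
```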
# Network

The network configuration (except for the NetBird VPN) is defined statically.

TODO.
# Nixpkgs

Machines can use different versions of NixOS; the supported and default versions are specified here.

## How to add a new version

- Switch to a new branch `nixos-$VERSION`
- Run the following command:

```bash
npins add channel nixos-$VERSION
```

- Edit `meta/nixpkgs.nix` and add `$VERSION` to the supported versions
- Read the release notes and check for breaking changes
- Update the nodes' versions
- Create a PR so that the CI checks that it builds

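The exact shape of `meta/nixpkgs.nix` is not specified here; the following Nix sketch only illustrates what the result of the steps above could look like — the attribute names are assumptions, not taken from the actual file:

```nix
# Hypothetical sketch of meta/nixpkgs.nix -- the real attribute
# names may differ from this illustration.
{
  # Versions that nodes are allowed to pin.
  supported = [ "23.11" "24.05" ];

  # Version used when a node does not specify one.
  default = "24.05";
}
```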
# Nodes

The nodes are declared statically; several options can be configured:

- `deployment`, the Colmena deployment options
- `stateVersion`, the state version of the node
- `nixpkgs`, the version of NixOS to use
- `admins`, the list of administrators specific to this node; they will be given root access
- `adminGroups`, a list of groups whose members will be added to `admins`
- `site`, the physical location of the node
- `vm-cluster`, the VM cluster hosting the node, when applicable

Some options are set automatically, for example:

- `deployment.targetHost` will be inferred from the network configuration
- `deployment.tags` will contain `infra-$site`, so that a full site can be redeployed at once

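Putting these options together, a node declaration could look like the following sketch — the node name, values, and exact option paths are illustrative assumptions, not copied from the repository:

```nix
# Hypothetical declaration of a node "web01" -- illustrative only.
{
  web01 = {
    site = "pav";                 # physical location of the node
    stateVersion = "24.05";       # NixOS state version
    nixpkgs = "24.05";            # NixOS version to use
    admins = [ "alice" ];         # node-specific administrators
    adminGroups = [ "root" ];     # groups whose members get root access
    deployment.tags = [ "web" ];  # extra Colmena tags; "infra-pav" is added automatically
  };
}
```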
# Organization

The organization defines the groups and members of the infrastructure team;
one day, this information will be synchronized into Kanidm.

## Members

For a member to be allowed access to a node, they must be defined in the `members` attribute set,
and their SSH keys must be available in the keys folder.

## Groups

Groups exist only to simplify access management:

- The `root` group will be given administrator access on all nodes
- The `iso` group will have its keys included in the ISOs built from the `iso` folder

Extra groups can be created at will, to be used in node-specific modules.
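The member and group declarations described above could be sketched as follows — the attribute names and member names are assumptions for illustration, not taken from the actual organization data:

```nix
# Hypothetical sketch of the organization data -- attribute names
# and member names are assumptions, not taken from the repository.
{
  members = {
    alice = { };  # SSH keys are expected in the keys folder
    bob   = { };
  };

  groups = {
    root = [ "alice" ];        # administrator access on all nodes
    iso  = [ "alice" "bob" ];  # keys included in the built ISOs
  };
}
```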
# Module
The meta configuration can be evaluated as a module, to perform checks on the structure of the data.
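A minimal sketch of such an evaluation, using `lib.evalModules` from nixpkgs — the file names and option layout here are assumptions, not the repository's actual entry point:

```nix
# Hypothetical evaluation of the metadata as a module.
# File names and structure are illustrative assumptions.
let
  lib = import <nixpkgs/lib>;
in
(lib.evalModules {
  modules = [
    ./options.nix                     # declares the expected structure
    { config = import ./nodes.nix; }  # the actual metadata to check
  ];
}).config
```

Evaluating the data through the module system means malformed metadata (a missing `site`, an unknown option, a wrong type) fails at evaluation time rather than at deployment time.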