Putting this into UnitConfig doesn't work: the bind mount didn't
happen, causing the blobs to be created on the SSD too.
This was already deployed and the data migrated over.
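For reference, a minimal sketch of the corrected placement, assuming the
directive in question is systemd's BindPaths= (service name and paths are
illustrative):
```
{
  # BindPaths= is a [Service] option (systemd.exec(5)). Setting it via
  # unitConfig renders it under [Unit], where systemd ignores it and no
  # bind mount is set up; serviceConfig emits it under [Service].
  systemd.services.tvix-store.serviceConfig.BindPaths = [
    "/mnt/hdd/tvix-store/blobs:/var/lib/tvix-store/blobs"
  ];
}
```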
Change-Id: Ie30c8f458cdad8b764817a48a048ec3ca3c18e64
Reviewed-on: https://cl.tvl.fyi/c/depot/+/12922
Tested-by: BuildkiteCI
Autosubmit: flokli <flokli@flokli.de>
Reviewed-by: raitobezarius <tvl@lahfa.xyz>
Move the Directory and PathInfo storage to the SSD, and only bind-mount
the blob storage from the HDD.
This should improve IO for random access.
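A sketch of the intended split, assuming tvix-store's service-address
flags (paths, URL schemes and the PATH setup are illustrative):
```
{
  # Metadata that is small and randomly accessed stays on the SSD; the
  # blob directory sits on a path bind-mounted from the HDD.
  # (Assumes the tvix-store binary is on the unit's PATH.)
  systemd.services.tvix-store.script = ''
    exec tvix-store daemon \
      --blob-service-addr objectstore+file:///var/lib/tvix-store/blobs \
      --directory-service-addr redb:///var/lib/tvix-store/directories.redb \
      --path-info-service-addr redb:///var/lib/tvix-store/pathinfo.redb
  '';
}
```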
Change-Id: Icf9408a879dec8a52541953682ffac25b31e73d3
Reviewed-on: https://cl.tvl.fyi/c/depot/+/12921
Tested-by: BuildkiteCI
Autosubmit: flokli <flokli@flokli.de>
Reviewed-by: raitobezarius <tvl@lahfa.xyz>
This is a band-aid until we have a proper fix.
Change-Id: Id9f0bab5f309a7796c1efee23071013618c6dd12
Reviewed-on: https://cl.tvl.fyi/c/depot/+/12896
Autosubmit: Jonas Chevalier <zimbatm@zimbatm.com>
Tested-by: BuildkiteCI
Reviewed-by: flokli <flokli@flokli.de>
With nar-bridge supporting zstd content-encoding, we no longer need the
nginx zstd module and can re-enable HTTP/2.
We also need to propagate the Accept-Encoding header sent by the client
to nar-bridge, so it actually knows it can send zstd.
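On the nginx side, that propagation might look like this (the virtual
host and upstream address are illustrative):
```
{
  services.nginx.virtualHosts."nixos.tvix.store".locations."/" = {
    proxyPass = "http://127.0.0.1:9000";
    extraConfig = ''
      # NixOS's recommendedProxySettings blank out Accept-Encoding
      # towards upstreams; restore the client's value so nar-bridge
      # knows it may answer with zstd.
      proxy_set_header Accept-Encoding $http_accept_encoding;
    '';
  };
}
```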
This reduces the time measured in the microbenchmark from ~13s to ~4.9s:
```
hyperfine 'rm -rf /tmp/cache; nix copy --from https://nixos.tvix.store/ --to "file:///tmp/cache?compression=none" /nix/store/jlkypcf54nrh4n6r0l62ryx93z752hb2-firefox-132.0'
Benchmark 1: rm -rf /tmp/cache; nix copy --from https://nixos.tvix.store/ --to "file:///tmp/cache?compression=none" /nix/store/jlkypcf54nrh4n6r0l62ryx93z752hb2-firefox-132.0
  Time (mean ± σ):      4.880 s ±  0.207 s    [User: 4.661 s, System: 2.377 s]
  Range (min … max):    4.700 s …  5.274 s    10 runs
```
Change-Id: Id092307423636163ae95ef87ec8fa558b83ce0bb
Reviewed-on: https://cl.tvl.fyi/c/depot/+/12835
Reviewed-by: Jörg Thalheim <joerg@thalheim.io>
Autosubmit: flokli <flokli@flokli.de>
Tested-by: BuildkiteCI
Reviewed-by: Ilan Joselevich <personal@ilanjoselevich.com>
We don't need a separate instance of opentelemetry-collector; alloy can
also do this job for us.
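A sketch of the equivalent Alloy pipeline (component labels and
addresses are illustrative):
```
{
  services.alloy.enable = true;
  # services.alloy reads *.alloy files from /etc/alloy by default.
  environment.etc."alloy/otlp.alloy".text = ''
    // Accept OTLP metrics, as the dedicated collector did before.
    otelcol.receiver.otlp "default" {
      grpc { endpoint = "127.0.0.1:4317" }
      output {
        metrics = [otelcol.exporter.prometheus.default.input]
      }
    }

    // Convert to Prometheus samples and remote-write them onwards.
    otelcol.exporter.prometheus "default" {
      forward_to = [prometheus.remote_write.default.receiver]
    }

    prometheus.remote_write "default" {
      endpoint { url = "http://127.0.0.1:8428/api/v1/write" }
    }
  '';
}
```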
Change-Id: I1b671ba57d70b080f7db112e1afcfe2e0cbdd13e
Reviewed-on: https://cl.tvl.fyi/c/depot/+/12829
Reviewed-by: flokli <flokli@flokli.de>
Reviewed-by: Jonas Chevalier <zimbatm@zimbatm.com>
Tested-by: BuildkiteCI
These are quite bursty, and I've seen messages about getting rate
limited.
Change-Id: I73058140957cb5718971fa432c003c2d1b0305e3
Reviewed-on: https://cl.tvl.fyi/c/depot/+/12828
Tested-by: BuildkiteCI
Reviewed-by: Ilan Joselevich <personal@ilanjoselevich.com>
Use grafana-alloy to collect system metrics.
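A minimal sketch (component labels and the remote-write URL are
illustrative):
```
{
  services.alloy.enable = true;
  environment.etc."alloy/host-metrics.alloy".text = ''
    // node-exporter-style host metrics, collected in-process by Alloy.
    prometheus.exporter.unix "host" { }

    prometheus.scrape "host" {
      targets    = prometheus.exporter.unix.host.targets
      forward_to = [prometheus.remote_write.default.receiver]
    }

    prometheus.remote_write "default" {
      endpoint { url = "http://127.0.0.1:8428/api/v1/write" }
    }
  '';
}
```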
Change-Id: I592e64ca722701d4f12e69a531a434b54954955a
Reviewed-on: https://cl.tvl.fyi/c/depot/+/12827
Reviewed-by: flokli <flokli@flokli.de>
Tested-by: BuildkiteCI
This enables routing metrics to a VictoriaMetrics instance, and
configures opentelemetry-collector to send them there.
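A sketch of the collector side, assuming the contrib build for its
prometheusremotewrite exporter (addresses are illustrative):
```
{
  services.victoriametrics.enable = true;
  services.opentelemetry-collector = {
    enable = true;
    # The prometheusremotewrite exporter ships in the contrib build.
    package = pkgs.opentelemetry-collector-contrib;
    settings = {
      receivers.otlp.protocols.grpc.endpoint = "127.0.0.1:4317";
      # VictoriaMetrics accepts Prometheus remote write on /api/v1/write.
      exporters.prometheusremotewrite.endpoint =
        "http://127.0.0.1:8428/api/v1/write";
      service.pipelines.metrics = {
        receivers = [ "otlp" ];
        exporters = [ "prometheusremotewrite" ];
      };
    };
  };
}
```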
Change-Id: If765191a4cc70ddcaad821d45132b96a10a12148
Reviewed-on: https://cl.tvl.fyi/c/depot/+/12812
Reviewed-by: flokli <flokli@flokli.de>
Tested-by: BuildkiteCI
Reviewed-by: Jonas Chevalier <zimbatm@zimbatm.com>
This is a fetch-through mirror of cache.nixos.org, hosted by NumTide.
The current machine is an SX65 Hetzner dedicated server with 4x22TB SATA
disks and 2x1TB NVMe disks.
The goals of this machine:
- Exercise tvix-store and nar-bridge code
- Collect usage metrics (see https://nixos.tvix.store/grafana)
- Identify bottlenecks
- Replace cache.nixos.org?
Be aware, however, that there are zero availability guarantees. Since Tvix
doesn't support garbage collection yet, we will either delete data or order
a bigger box.
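To try it out, a client can add the mirror as an extra substituter; this
sketch assumes NAR signatures are passed through from cache.nixos.org, so
the default cache.nixos.org-1 key still verifies them:
```
{
  nix.settings.extra-substituters = [ "https://nixos.tvix.store" ];
}
```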
Change-Id: Id24baa18cae1629a06caaa059c0c75d4a01659d5
Reviewed-on: https://cl.tvl.fyi/c/depot/+/12811
Tested-by: BuildkiteCI
Reviewed-by: Jonas Chevalier <zimbatm@zimbatm.com>
Reviewed-by: flokli <flokli@flokli.de>