This allows us to use containers wrapping a DirectoryService as a DirectoryService themselves.
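A minimal sketch of the pattern with a simplified stand-in trait (the real DirectoryService is async and richer, and the actual change may be written differently, e.g. via Deref or AsRef):

```rust
use std::sync::Arc;

trait DirectoryService {
    fn contains(&self, digest: &[u8; 32]) -> bool;
}

// Implementing the trait for the wrapper lets Arc<impl DirectoryService>
// be passed wherever a DirectoryService is expected.
impl<T: DirectoryService + ?Sized> DirectoryService for Arc<T> {
    fn contains(&self, digest: &[u8; 32]) -> bool {
        (**self).contains(digest)
    }
}

struct MemoryDirectoryService;

impl DirectoryService for MemoryDirectoryService {
    fn contains(&self, _digest: &[u8; 32]) -> bool {
        false
    }
}

fn uses_service(svc: impl DirectoryService) -> bool {
    svc.contains(&[0; 32])
}

fn main() {
    // Both the service itself and a container around it can be used directly.
    assert!(!uses_service(MemoryDirectoryService));
    assert!(!uses_service(Arc::new(MemoryDirectoryService)));
}
```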
Change-Id: I56cca27b3212858db8b12b874df0e567dd868711
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11248
Reviewed-by: raitobezarius <tvl@lahfa.xyz>
Autosubmit: flokli <flokli@flokli.de>
Tested-by: BuildkiteCI
This uses DirectoryClosureValidator for validation and the sled batch
API to insert multiple directories at once.
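For reference, the shape of the sled batch API used here (keys and values below are placeholders; the real code stores encoded Directory protos keyed by their digest):

```rust
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let db = sled::Config::new().temporary(true).open()?;

    // Collect all writes into a single batch and apply it in one go.
    let mut batch = sled::Batch::default();
    batch.insert(b"digest-a".to_vec(), b"directory-a".to_vec());
    batch.insert(b"digest-b".to_vec(), b"directory-b".to_vec());
    db.apply_batch(batch)?;

    assert!(db.get(b"digest-a")?.is_some());
    Ok(())
}
```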
Change-Id: I2d6dc513ccbc02e638f8d22173da5463e73182ee
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11222
Autosubmit: flokli <flokli@flokli.de>
Reviewed-by: Connor Brewster <cbrewster@hey.com>
Tested-by: BuildkiteCI
Reviewed-by: raitobezarius <tvl@lahfa.xyz>
This greatly simplifies the code in this function, replacing it with a
much better tested (and more capable!) version of the validation logic.
It also enables the gRPC server frontend to make use of the
DirectoryPutter interface. While this might not be too visible in terms
of latency thanks to gRPC streams bursting, it also enables further
optimizations later (such as bucketing of directory closures).
Change-Id: I21f805aa72377dd5266de3b525905d9f445337d6
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11221
Autosubmit: flokli <flokli@flokli.de>
Tested-by: BuildkiteCI
Reviewed-by: raitobezarius <tvl@lahfa.xyz>
This simplifies a bunch of code, and gets rid of some TODOs.
Also, move it out of castore/utils, and into its own file.
Change-Id: Ie63e05a6cdfb2a73e878cf7107f9172aed1cdf13
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11224
Tested-by: BuildkiteCI
Reviewed-by: raitobezarius <tvl@lahfa.xyz>
Autosubmit: flokli <flokli@flokli.de>
This can be used to validate a Directory closure (a connected DAG of
Directories) and the insertion order of its Directories.
Directories need to be inserted (via `add`) in an order from the leaves
to the root. During insertion, we validate as much as we can at that
time:
- individual validation of Directory messages
- validation of insertion order (no references to not-yet-uploaded Directories)
- validation of size fields of referred Directories
Internally it keeps all received Directories (and their sizes) in a HashMap,
keyed by digest.
Once all Directories have been inserted, a drain() function can be
called to get a (deduplicated and) validated list of directories, in
from-leaves-to-root order (to be stored somewhere).
While assembling that list, a graph connectivity check is also
performed, to ensure there are no separate components being sent (and
only one root).
It adds a test suite for these cases, which is much nicer than testing
them where these checks previously lived (only in the gRPC server wrapper).
Followup CLs will move the existing putters to use this.
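A self-contained sketch of the idea, using toy types instead of the real protobuf Directory message and B3 digests (deduplication of repeated inserts and ordering subtleties are omitted):

```rust
use std::collections::{HashMap, HashSet};

type Digest = [u8; 32];

struct Directory {
    digest: Digest,               // toy: digest provided directly
    size: u64,                    // size as referred to by parents
    children: Vec<(Digest, u64)>, // referred subdirectories: (digest, claimed size)
}

#[derive(Default)]
struct ClosureValidator {
    seen: HashMap<Digest, usize>, // digest -> index into `inserted`
    inserted: Vec<Directory>,     // in insertion (leaves-to-root) order
}

impl ClosureValidator {
    /// Insert a directory. Every referred child must already be known, and
    /// its recorded size must match the size claimed by this parent.
    fn add(&mut self, dir: Directory) -> Result<(), String> {
        for (child, claimed_size) in &dir.children {
            let idx = self
                .seen
                .get(child)
                .ok_or("referred directory not yet uploaded")?;
            if self.inserted[*idx].size != *claimed_size {
                return Err("size mismatch for referred directory".into());
            }
        }
        self.seen.insert(dir.digest, self.inserted.len());
        self.inserted.push(dir);
        Ok(())
    }

    /// Return the validated directories, leaves first. The last inserted
    /// directory is taken as the root; everything must be reachable from it.
    fn drain(self) -> Result<Vec<Directory>, String> {
        let root = self.inserted.last().ok_or("no directories inserted")?;
        let mut reachable = HashSet::new();
        let mut todo = vec![root.digest];
        while let Some(d) = todo.pop() {
            if reachable.insert(d) {
                let dir = &self.inserted[self.seen[&d]];
                todo.extend(dir.children.iter().map(|(c, _)| *c));
            }
        }
        if reachable.len() != self.inserted.len() {
            return Err("disconnected directories were uploaded".into());
        }
        Ok(self.inserted)
    }
}
```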
Change-Id: Ie88c832924c170a24626e9e3e91d868497b5d7a4
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11220
Tested-by: BuildkiteCI
Reviewed-by: raitobezarius <tvl@lahfa.xyz>
Autosubmit: flokli <flokli@flokli.de>
We need to ensure the Directories are successfully uploaded before doing
any testing with them.
Change-Id: Iafa8deb86b3d5eb302ebfba3ced34385f67a7229
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11244
Reviewed-by: Connor Brewster <cbrewster@hey.com>
Tested-by: BuildkiteCI
Autosubmit: flokli <flokli@flokli.de>
This allows these messages to be put in HashSets.
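Roughly the property this unlocks (toy struct standing in for the generated proto types):

```rust
use std::collections::HashSet;

// Deriving Eq and Hash (on top of PartialEq) is what allows values to be
// used as HashSet entries or HashMap keys.
#[derive(Debug, PartialEq, Eq, Hash)]
struct Node {
    name: Vec<u8>,
    digest: Vec<u8>,
}

fn main() {
    let mut seen = HashSet::new();
    seen.insert(Node { name: b"a".to_vec(), digest: vec![0; 32] });
    // Inserting an equal value again is a no-op.
    assert!(!seen.insert(Node { name: b"a".to_vec(), digest: vec![0; 32] }));
    assert_eq!(seen.len(), 1);
}
```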
Change-Id: Ia58094cafe53eb624578821d3d8d969c5d21a1d7
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11219
Tested-by: BuildkiteCI
Reviewed-by: Connor Brewster <cbrewster@hey.com>
Autosubmit: flokli <flokli@flokli.de>
Log the entire span at "trace" level, not just its `ret` events.
The level of the error value event defaults to ERROR, so we don't lose
these.
B3Digest implements Debug and Display the same way, so we can omit the
`(Display)` part in `ret(Display)` for them.
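For illustration, roughly what the attribute ends up looking like (function and types here are made up, not the actual tvix-castore code):

```rust
use tracing::instrument;

// The span itself is at trace level; `ret` events follow the span's level,
// while `err` events default to ERROR, so errors stay visible.
#[instrument(level = "trace", ret, err)]
fn lookup_blob(digest: &str) -> Result<String, std::io::Error> {
    Ok(format!("blobs/{digest}"))
}
```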
Change-Id: Id00d123a5798e5bdc9820dd97ae2b4d4eb5455f0
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11218
Tested-by: BuildkiteCI
Reviewed-by: Connor Brewster <cbrewster@hey.com>
There is no public API to construct this; there's exactly one caller,
and it's perfectly fine to directly populate the struct there.
Change-Id: Idae43a0162ee9bc687d21c550e0c9df33f12d263
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11217
Tested-by: BuildkiteCI
Reviewed-by: Connor Brewster <cbrewster@hey.com>
This makes it easier to see what's going wrong when uploading multiple
Directories.
Change-Id: Ieb71424b9761777c5f719b2f365962644de82baf
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11209
Autosubmit: flokli <flokli@flokli.de>
Reviewed-by: raitobezarius <tvl@lahfa.xyz>
Tested-by: BuildkiteCI
This functionality is provided by the object store backend too
(using `objectstore+file://$some_path`).
This backend also supports content-defined chunking and compresses
chunks with zstd.
Change-Id: I5968c713112c400d23897c59db06b6c713c9d8cb
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11205
Autosubmit: flokli <flokli@flokli.de>
Tested-by: BuildkiteCI
Reviewed-by: raitobezarius <tvl@lahfa.xyz>
This controls whether tvix-castore has support for various cloud
backends or not.
Use this to control the set of feature flags for the object_store
backend, and only enable the aws, azure and gcp ones if it's set.
In the future this can be used to enable/disable other cloud backends
too.
Without feature flags, `object_store` already supports the `InMemory`
and `LocalFilesystem` backends, and we also want to unconditionally
enable the `http` one. Make sure at least the construction of these
services is covered in the tests.
Similarly, the tvix-store crate, which provides the tvix-store CLI, has
a `cloud` feature flag too (defaulting to enabled).
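As a rough sketch of how such a flag gates things at the code level (the scheme lists are illustrative, not the exact set):

```rust
// Backends that are always available (no extra object_store features needed).
#[cfg(not(feature = "cloud"))]
fn scheme_supported(scheme: &str) -> bool {
    matches!(scheme, "memory" | "file" | "http" | "https")
}

// With the `cloud` feature, the aws/azure/gcp schemes become available too.
#[cfg(feature = "cloud")]
fn scheme_supported(scheme: &str) -> bool {
    matches!(
        scheme,
        "memory" | "file" | "http" | "https" | "s3" | "gs" | "az"
    )
}

fn main() {
    println!("s3 supported: {}", scheme_supported("s3"));
}
```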
Change-Id: I9fb9c87b740e7dc83f8ff7a0862905d036d513f2
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11204
Autosubmit: flokli <flokli@flokli.de>
Reviewed-by: raitobezarius <tvl@lahfa.xyz>
Tested-by: BuildkiteCI
The Rust trait was missing documentation of the order of the elements in
the stream. Document that, as well as the reasoning behind it.
Change-Id: I27ef0b2020082783fc41c2015233175e2b8e716d
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11203
Reviewed-by: raitobezarius <tvl@lahfa.xyz>
Autosubmit: flokli <flokli@flokli.de>
Tested-by: BuildkiteCI
This will allow feature-flagging some of the backends.
Change-Id: Iddbdb89d3cf9c966a2c25b06b03e6917b284cae5
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11201
Tested-by: BuildkiteCI
Reviewed-by: raitobezarius <tvl@lahfa.xyz>
Autosubmit: flokli <flokli@flokli.de>
This will allow feature-flagging some of the backends.
Change-Id: Idffbf8b3fd154f5a3d938225c3871feffea8ff8c
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11200
Autosubmit: flokli <flokli@flokli.de>
Tested-by: BuildkiteCI
Reviewed-by: raitobezarius <tvl@lahfa.xyz>
We don't need to use BASE64 here on our own; B3Digest has a Display
impl.
This will also make sure the `b3:` prefix is present in field values.
Change-Id: I0ce6ee0f7e7e99fb9b16872953a1b742e99be291
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11192
Reviewed-by: Connor Brewster <cbrewster@hey.com>
Tested-by: BuildkiteCI
Autosubmit: flokli <flokli@flokli.de>
Have derive_{blob,chunk}_path emit trace-level events for both the
values they're called with and the return value.
With RUST_LOG in place, this doesn't get lost in other, unrelated noise.
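For reference, a minimal subscriber setup where RUST_LOG decides what shows up (requires tracing-subscriber with its env-filter feature; the target name in the example invocation is an assumption):

```rust
use tracing_subscriber::EnvFilter;

fn main() {
    // e.g. RUST_LOG=tvix_castore=trace ./some-binary
    tracing_subscriber::fmt()
        .with_env_filter(EnvFilter::from_default_env())
        .init();

    tracing::trace!("only emitted when the filter enables trace for this target");
}
```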
Change-Id: Id2451e3657324eff482841eb26a22d19e22bde30
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11136
Autosubmit: flokli <flokli@flokli.de>
Reviewed-by: Connor Brewster <cbrewster@hey.com>
Tested-by: BuildkiteCI
Whenever this encounters an open_read(), it'll first check for more
granular chunking. If there's more granular chunking data available, a
ChunkedReader is constructed (which supports seeking backwards).
This is currently still a bit naive, and doesn't compose, as
`ChunkedReader` uses `self` as the `BlobService` to ask for the
individual chunks.
In a store-composition future, we might want to compose this differently,
essentially constructing `ChunkedReader` with another `BlobService`
representing the entire hierarchy, so there's a chance to locally cache
things, and make fewer requests.
Change-Id: I22e0df4d6245f666d083b4f0b7114d3ac41d1dce
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11185
Tested-by: BuildkiteCI
Reviewed-by: Connor Brewster <cbrewster@hey.com>
Use the same format as Display: b3: followed by the base64
representation. This makes the Debug output of everything containing a
B3 digest much nicer to read.
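A toy stand-in showing the shape of it (the real B3Digest differs):

```rust
use data_encoding::BASE64;
use std::fmt;

struct Digest([u8; 32]);

impl fmt::Display for Digest {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "b3:{}", BASE64.encode(&self.0))
    }
}

impl fmt::Debug for Digest {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Same output as Display, so anything embedding a digest debug-prints readably.
        fmt::Display::fmt(self, f)
    }
}

fn main() {
    println!("{:?}", Digest([0; 32])); // prints b3:<base64 of 32 zero bytes>
}
```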
Change-Id: I3ca3154d0b6fb07781c8f9c83ece3ff1a6957902
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11181
Autosubmit: flokli <flokli@flokli.de>
Reviewed-by: Connor Brewster <cbrewster@hey.com>
Tested-by: BuildkiteCI
This bumps tonic and surrounding crates to 0.11.x.
We added support for tonic 0.11.x into tokio-listener
(https://github.com/vi/tokio-listener/pull/4), so that's bumped as well.
Change-Id: Icfade5894403228299836fefb21b2f9ae59dbebb
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11156
Reviewed-by: Connor Brewster <cbrewster@hey.com>
Autosubmit: flokli <flokli@flokli.de>
Tested-by: BuildkiteCI
`build.rs` emits rerun-if-changed statements for all proto files, as
well as all include paths we pass it.
Unfortunately, due to protobuf's include path rules, we need to specify
the path to the depot root itself as an include path, at least when
building impurely with `cargo`. This causes cargo to essentially always
rebuild, as it also puts its own temporary files in there.
Unfortunately, tonic-build does not narrow the rerun-if-changed
statements down to the individual .proto files that are included.
Disable emitting these `rerun-if-changed` statements for now.
This could cause cargo to not rebuild protos when they change, causing
stale data until the next local `cargo clean`. But considering the
protos don't change that frequently, and any staleness will immediately
surface when building via Nix (either locally or in CI), it's a
good-enough compromise.
Change-Id: Ifd279a2216222ef3fc0e70c5a2fe6f87997f562e
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11157
Autosubmit: flokli <flokli@flokli.de>
Reviewed-by: sterni <sternenseemann@systemli.org>
Tested-by: BuildkiteCI
The object_store crate supports a ton of different stores, with different schemes.
For now, use an objectstore+ scheme prefix to enable these.
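Roughly the idea (the URL and printing are illustrative; the real code wires the parsed store into a BlobService):

```rust
use url::Url;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let uri = "objectstore+memory:///";

    // Strip the marker prefix, then let object_store pick the backend from
    // the remaining URL's scheme.
    let inner = uri.strip_prefix("objectstore+").expect("missing prefix");
    let (store, path) = object_store::parse_url(&Url::parse(inner)?)?;
    println!("backend: {store}, root path: {path}");
    Ok(())
}
```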
Change-Id: I946f76e32a0fb0867ef59060217894cda5b959b9
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11080
Tested-by: BuildkiteCI
Reviewed-by: Connor Brewster <cbrewster@hey.com>
Autosubmit: flokli <flokli@flokli.de>
This uses the `object_store` crate to expose a tvix-castore BlobService
backed by object storage.
It uses FastCDC to split blobs into smaller chunks when writing to it.
These are exposed via the .chunks() method.
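A rough sketch of the write-side chunking (chunk size parameters here are illustrative, not the ones used by the backend):

```rust
// Split a blob into content-defined chunks and compute a blake3 digest per chunk.
fn chunk_blob(data: &[u8]) -> Vec<(blake3::Hash, Vec<u8>)> {
    fastcdc::v2020::FastCDC::new(data, 64 * 1024, 256 * 1024, 1024 * 1024)
        .map(|chunk| {
            let start = chunk.offset as usize;
            let end = start + chunk.length as usize;
            let bytes = data[start..end].to_vec();
            (blake3::hash(&bytes), bytes)
        })
        .collect()
}

fn main() {
    let blob = vec![42u8; 1_000_000];
    let chunks = chunk_blob(&blob);
    println!("split into {} chunk(s)", chunks.len());
}
```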
Change-Id: I2858c403d4d6490cdca73ebef03c26290b2b3c8e
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11076
Reviewed-by: Connor Brewster <cbrewster@hey.com>
Tested-by: BuildkiteCI
Reviewed-by: Brian Olsen <me@griff.name>
This only contains the outer metadata wrapping, and that's not too interesting:
> Request { metadata: MetadataMap { headers: {"content-type":
> "application/grpc", "user-agent": "grpc-go/1.60.1", "te": "trailers",
> "grpc-accept-encoding": "gzip"} }, message: Streaming, extensions:
> Extensions }
Drop these fields for now, and rely on the underlying implementations to
add instrumentation for the application-specific fields.
Clean up the error logging a bit.
Change-Id: Ife1090ed411766a61e1fa60fd4c9570f38de1e98
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11102
Tested-by: BuildkiteCI
Autosubmit: flokli <flokli@flokli.de>
Reviewed-by: Connor Brewster <cbrewster@hey.com>
This only contains the outer metadata wrapping, and that's not too interesting:
> Request { metadata: MetadataMap { headers: {"content-type":
> "application/grpc", "user-agent": "grpc-go/1.60.1", "te": "trailers",
> "grpc-accept-encoding": "gzip"} }, message: Streaming, extensions:
> Extensions }
Drop these fields for now, and rely on the underlying implementations to
add instrumentation for the application-specific fields.
Log errors in some places where we didn't so far.
Change-Id: Ia68d6c526987d3716be62a0809195401cf28512b
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11101
Autosubmit: flokli <flokli@flokli.de>
Reviewed-by: Connor Brewster <cbrewster@hey.com>
Tested-by: BuildkiteCI
For everything using reqwest here during test cases, we also need to
set SSL_CERT_FILE.
Change-Id: If8aeda65f3d75cb9ac5c9bc64e37a0cb7dffc17c
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11092
Autosubmit: flokli <flokli@flokli.de>
Reviewed-by: raitobezarius <tvl@lahfa.xyz>
Tested-by: BuildkiteCI
If there's an unexpected test failure, print it out, rather than just
saying something is false even though it should be true.
Use .expect() for this, which displays the error if it failed.
We can't use expect_err(), as our stores are not display'able, so use an
assertion with a message there.
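As a toy illustration of the pattern (the function below is a stand-in, not the real store API):

```rust
fn open_blob(exists: bool) -> Result<Vec<u8>, std::io::Error> {
    if exists {
        Ok(vec![1, 2, 3])
    } else {
        Err(std::io::Error::new(std::io::ErrorKind::NotFound, "no such blob"))
    }
}

#[test]
fn present_blob_can_be_read() {
    // On failure this prints the underlying error, not just "assertion failed".
    let data = open_blob(true).expect("blob should be readable");
    assert_eq!(data.len(), 3);
}

#[test]
fn missing_blob_errors() {
    // expect_err() needs the Ok type to be formattable, which our stores
    // aren't, so an assertion with a message is used instead.
    assert!(open_blob(false).is_err(), "expected missing blob to yield an error");
}
```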
Change-Id: I2d88861d979d107edc0717fbdb3cdac9a6bfc5e4
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11091
Tested-by: BuildkiteCI
Reviewed-by: Brian Olsen <me@griff.name>
Reviewed-by: flokli <flokli@flokli.de>
HashingReader wraps an existing AsyncRead, and allows querying for the
digest of all data read "through" it.
The hash function is configurable by type parameter, and we define
B3HashingReader.
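A condensed sketch of the idea, hardcoded to blake3 (the real HashingReader is generic over the hash function and differs in details):

```rust
use std::pin::Pin;
use std::task::{Context, Poll};
use tokio::io::{AsyncRead, ReadBuf};

/// Wraps an AsyncRead and hashes all data read through it.
struct B3HashingReader<R> {
    inner: R,
    hasher: blake3::Hasher,
}

impl<R: AsyncRead + Unpin> B3HashingReader<R> {
    fn new(inner: R) -> Self {
        Self { inner, hasher: blake3::Hasher::new() }
    }

    /// Digest of all data read through this reader so far.
    fn digest(&self) -> blake3::Hash {
        self.hasher.finalize()
    }
}

impl<R: AsyncRead + Unpin> AsyncRead for B3HashingReader<R> {
    fn poll_read(
        self: Pin<&mut Self>,
        cx: &mut Context<'_>,
        buf: &mut ReadBuf<'_>,
    ) -> Poll<std::io::Result<()>> {
        let already_filled = buf.filled().len();
        let this = self.get_mut();
        match Pin::new(&mut this.inner).poll_read(cx, buf) {
            Poll::Ready(Ok(())) => {
                // Feed only the newly read bytes into the hasher.
                this.hasher.update(&buf.filled()[already_filled..]);
                Poll::Ready(Ok(()))
            }
            other => other,
        }
    }
}
```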
Change-Id: Ic08142077566fc08836662218f5ec8c3aff80be5
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11087
Autosubmit: flokli <flokli@flokli.de>
Reviewed-by: raitobezarius <tvl@lahfa.xyz>
Tested-by: BuildkiteCI
This allows calling .into() to get a B3Digest.
Change-Id: I6e63b496413cd00d84acfcd15c7de0f64c79721f
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11086
Autosubmit: flokli <flokli@flokli.de>
Reviewed-by: raitobezarius <tvl@lahfa.xyz>
Tested-by: BuildkiteCI
The public-consumable thing here is ChunkedReader, not ChunkedBlob.
ChunkedBlob is a helper that can be used to get a new AsyncRead, but
not AsyncSeek. It is used internally by ChunkedReader whenever the
client seeks.
Make this more obvious, by extending the documentation, and putting
ChunkedReader at the top of this file.
Also make ChunkedBlob and its methods private, and give ChunkedReader a
more useful constructor (from_chunks, instead of from_chunked_blob).
Change-Id: I2399867591df923faa73927b924e7c116ad98dc0
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11079
Tested-by: BuildkiteCI
Reviewed-by: Brian Olsen <me@griff.name>
Reviewed-by: Connor Brewster <cbrewster@hey.com>
Some implementations of DirectoryService might not allow retrieval of
intermediate Directory nodes that are not at the "root".
Think of an object store implementation: the client is doing a
get_recursive anyway to reduce the number of roundtrips.
By documenting the fact we don't need to support looking up intermediate
Directory messages, we can just batch all directories into the same
object, keyed by the root.
Change-Id: I019d720186d03c4125cec9191e93d20586a20963
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10988
Autosubmit: flokli <flokli@flokli.de>
Reviewed-by: tazjin <tazjin@tvl.su>
Tested-by: BuildkiteCI
This otherwise gets a bit spammy.
Change-Id: I288350a600d79a394c239f253424ad55bc3cefc5
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10954
Tested-by: BuildkiteCI
Reviewed-by: tazjin <tazjin@tvl.su>
These provide seekable access into a Blob for which we have more
granular chunking information.
There's no support for verified streaming in here yet; this simply
produces a stream of readers for each chunk, skipping irrelevant chunks
and data from the first chunk at the beginning.
A seek simply produces a new reader using the same process.
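The seek bookkeeping boils down to something like this (standalone sketch, not the actual implementation):

```rust
// Given per-chunk sizes, find which chunk an absolute offset falls into and
// how many bytes to skip within that chunk.
fn chunk_for_offset(chunk_sizes: &[u64], offset: u64) -> Option<(usize, u64)> {
    let mut start = 0u64;
    for (i, size) in chunk_sizes.iter().enumerate() {
        if offset < start + size {
            return Some((i, offset - start)); // (chunk index, bytes to skip)
        }
        start += size;
    }
    None // offset is past the end of the blob
}

fn main() {
    // Three chunks of 10, 20 and 5 bytes; offset 25 lands 15 bytes into chunk 1.
    assert_eq!(chunk_for_offset(&[10, 20, 5], 25), Some((1, 15)));
    assert_eq!(chunk_for_offset(&[10, 20, 5], 40), None);
}
```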
Change-Id: I37f76b752adce027586770475435f3990a6dee0b
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10731
Reviewed-by: Connor Brewster <cbrewster@hey.com>
Autosubmit: flokli <flokli@flokli.de>
Tested-by: BuildkiteCI
docs/verified-streaming.md explained how CDC and verified streaming can
work together, but didn't really highlight enough how chunking in
general also helps with seeking.
In addition, a lot of the thoughts w.r.t. the BlobStore protocol (both
the gRPC and Rust traits), why there's no support for seeking directly
in gRPC, and how clients should behave w.r.t. chunked fetching were
missing, or mixed together with the verified streaming bits.
While there is no verified streaming version yet, a chunked one is
coming soon, and documenting this a bit better is going to make it easier
to understand, as well as provide some outlook on where this is heading.
Change-Id: Ib11b8ccf2ef82f9f3a43b36103df0ad64a9b68ce
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10733
Autosubmit: flokli <flokli@flokli.de>
Tested-by: BuildkiteCI
Reviewed-by: Connor Brewster <cbrewster@hey.com>
The Stat() method was just always signalling that no granular chunks are
available. However, as we now have a .chunks() method, we can expose that
information over gRPC.
over gRPC.
Change-Id: I74f0890ae083f301bb0cec62f1ea4a95463ac590
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10736
Tested-by: BuildkiteCI
Autosubmit: flokli <flokli@flokli.de>
Reviewed-by: Connor Brewster <cbrewster@hey.com>
All chunks must have valid blake3 digests. It is allowed to send an
empty list, if no more granular chunking is available.
Change-Id: I7ecb53579cdf40fd938bb68a85685751b4d3626f
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10726
Tested-by: BuildkiteCI
Reviewed-by: Connor Brewster <cbrewster@hey.com>
Autosubmit: flokli <flokli@flokli.de>
This can be written without the additional function.
Change-Id: Ib11c5d5254d3e44c8fa9661414835b0622eb1ac4
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10735
Reviewed-by: Connor Brewster <cbrewster@hey.com>
Autosubmit: flokli <flokli@flokli.de>
Tested-by: BuildkiteCI
"given chunksize" is misleading here. It's up to the backend to decide
if it does chunking at all, and how it chunks.
Change-Id: I4f130ca9ac34db79f18ef1d6475295806ac7f9a4
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10728
Autosubmit: flokli <flokli@flokli.de>
Reviewed-by: Connor Brewster <cbrewster@hey.com>
Tested-by: BuildkiteCI
BlobService already implies Send and Sync, we don't need to explicitly
list it here.
Change-Id: I58a4c5912be61a60acd961565979aa01d94ee0f7
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10727
Reviewed-by: Connor Brewster <cbrewster@hey.com>
Tested-by: BuildkiteCI
Autosubmit: flokli <flokli@flokli.de>
In the past, we had a `todo!` on unsupported node types; this now returns
a proper error that can be caught by the caller.
Change-Id: Icba4c1dab33c0d670a97f162c9b358d1ed5855cb
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10675
Tested-by: BuildkiteCI
Reviewed-by: flokli <flokli@flokli.de>
The BoxStream type alias is more concise and easier to read than
the full `Pin<Box<dyn Stream<Item = ...> + Send + ...>>` type.
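Quick illustration (the function and item type are made up):

```rust
use futures::stream::{self, BoxStream, StreamExt};

// BoxStream<'static, u32> expands to Pin<Box<dyn Stream<Item = u32> + Send + 'static>>.
fn numbers() -> BoxStream<'static, u32> {
    stream::iter(vec![1, 2, 3]).boxed()
}

fn main() {
    let collected: Vec<u32> = futures::executor::block_on(numbers().collect());
    assert_eq!(collected, vec![1, 2, 3]);
}
```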
Change-Id: I5b7bccfd066ded5557e01f7895f4cf5c4a33bd44
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10677
Reviewed-by: flokli <flokli@flokli.de>
Tested-by: BuildkiteCI
Autosubmit: Connor Brewster <cbrewster@hey.com>
In one function that does the heavy lifting, `ingest_entries`, plus three additional helpers:
- `walk_path_for_ingestion`, which performs the tree walking in a very naive way and can be replaced by the user
- `leveled_entries_to_stream`, which transforms a list of lists of
  entries, ordered by their depth in the tree, into a stream of entries in
  bottom-to-top order (what I will call Merkle-compatible order from now
  on)
- `ingest_path`, which calls the previous functions.
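A minimal sketch of the `leveled_entries_to_stream` idea (the String entry type is just a placeholder):

```rust
use futures::stream::{self, Stream, StreamExt};

// Entries grouped by depth (root level first) are emitted deepest-first, so
// children always precede their parents in the resulting stream.
fn leveled_entries_to_stream(levels: Vec<Vec<String>>) -> impl Stream<Item = String> {
    stream::iter(levels.into_iter().rev().flatten())
}

fn main() {
    let levels: Vec<Vec<String>> = vec![
        vec!["/".into()],
        vec!["/a".into(), "/b".into()],
        vec!["/a/file".into()],
    ];
    let out: Vec<String> =
        futures::executor::block_on(leveled_entries_to_stream(levels).collect());
    assert_eq!(out, vec!["/a/file", "/a", "/b", "/"]);
}
```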
Change-Id: I724b972d3c5bffc033f03363255eae448f017cef
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10573
Tested-by: BuildkiteCI
Reviewed-by: flokli <flokli@flokli.de>
Autosubmit: raitobezarius <tvl@lahfa.xyz>