This one doesn't require us to deal with poisoning, is upgradeable, and
is the right thing to use when locking access to data rather than IO
resources.
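A minimal sketch of the pattern, assuming the lock in question is
parking_lot::RwLock (which matches the properties described: no
poisoning, upgradable reads); the data type here is made up for
illustration:

    use parking_lot::{RwLock, RwLockUpgradableReadGuard};
    use std::collections::HashMap;

    fn main() {
        let db: RwLock<HashMap<String, Vec<u8>>> = RwLock::new(HashMap::new());

        // No PoisonError to unwrap on either path:
        assert_eq!(db.read().len(), 0);

        // An upgradable read can be promoted to a write lock without
        // releasing and re-acquiring:
        let guard = db.upgradable_read();
        if !guard.contains_key("key") {
            let mut w = RwLockUpgradableReadGuard::upgrade(guard);
            w.insert("key".to_string(), vec![]);
        }
    }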
Change-Id: I78634953a73404500d28f51f1d93a87e215c8149
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11612
Autosubmit: flokli <flokli@flokli.de>
Tested-by: BuildkiteCI
Reviewed-by: Connor Brewster <cbrewster@hey.com>
This never did any chunking, and sled (rightfully) performs really badly
if values get too large.
We switched the default to using the objectstore backend with the local
filesystem a while ago; there's no need to keep this footgun around
anymore.
Change-Id: I2c12672f2ea6a22e40d0cbf9161560baddd73d4a
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11616
Tested-by: BuildkiteCI
Autosubmit: flokli <flokli@flokli.de>
Reviewed-by: Connor Brewster <cbrewster@hey.com>
Userland likes to seek backwards, and until we have store composition
and can serve chunks from a local cache, we need to buffer the
individual chunks in memory.
Change-Id: I66978a0722d5f55ed4a9a49d116cecb64a01995d
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11448
Tested-by: BuildkiteCI
Reviewed-by: raitobezarius <tvl@lahfa.xyz>
When (re)initializing a chunked reader, we were erroneously skipping the
first n bytes from all chunks, not just the first one.
Fix this by passing in an enumerated list of chunks, and only calling
SeekFrom::Start() on the first chunk in the stream.
With this, I'm able to invoke b3sum on bin/bash successfully.
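A simplified, synchronous sketch of the fixed behaviour (not the actual
ChunkedReader code): the chunks are enumerated, and only the first one
gets its leading bytes skipped.

    use std::io::{Cursor, Read, Seek, SeekFrom};

    fn concat_from_offset(chunks: Vec<Vec<u8>>, skip_in_first: u64) -> std::io::Result<Vec<u8>> {
        let mut out = Vec::new();
        for (i, chunk) in chunks.into_iter().enumerate() {
            let mut r = Cursor::new(chunk);
            if i == 0 {
                // Only the first chunk in the stream starts mid-way.
                r.seek(SeekFrom::Start(skip_in_first))?;
            }
            r.read_to_end(&mut out)?;
        }
        Ok(out)
    }

    fn main() -> std::io::Result<()> {
        let chunks = vec![b"hello ".to_vec(), b"world".to_vec()];
        assert_eq!(concat_from_offset(chunks, 2)?, b"llo world".to_vec());
        Ok(())
    }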
Change-Id: I52ea480569267e093b0ac9d6bcd5c2d1b4db25f7
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11443
Reviewed-by: raitobezarius <tvl@lahfa.xyz>
Tested-by: BuildkiteCI
Increase the discard_buf to 4096 bytes (a size I've seen in practice).
Use the ready! macro to propagate Poll::Pending.
Make it clearer what exactly should be skipped in total, and what during
the current iteration.
Also document that the poll_read call already takes care of updating
self.pos, as I ran into that trap earlier (and had added it here as
well).
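A sketch of the shape this takes (hypothetical SkippingReader type, not
the actual tvix code), showing the discard buffer, the ready!-based
Pending propagation, and the fact that the inner poll_read already
maintains its own position:

    use std::{
        io,
        pin::Pin,
        task::{ready, Context, Poll},
    };
    use tokio::io::{AsyncRead, ReadBuf};

    /// Skips `skip_remaining` bytes of `inner` before passing reads through.
    struct SkippingReader<R> {
        inner: R,
        skip_remaining: u64,
    }

    impl<R: AsyncRead + Unpin> AsyncRead for SkippingReader<R> {
        fn poll_read(
            self: Pin<&mut Self>,
            cx: &mut Context<'_>,
            buf: &mut ReadBuf<'_>,
        ) -> Poll<io::Result<()>> {
            let this = self.get_mut();
            while this.skip_remaining > 0 {
                // Discard up to 4096 bytes per iteration into a scratch buffer.
                let mut discard = [0u8; 4096];
                let limit = this.skip_remaining.min(4096) as usize;
                let mut discard_buf = ReadBuf::new(&mut discard[..limit]);
                // ready! propagates Poll::Pending without a manual match block.
                ready!(Pin::new(&mut this.inner).poll_read(cx, &mut discard_buf))?;
                let n = discard_buf.filled().len();
                if n == 0 {
                    break; // EOF before the skip was satisfied
                }
                // The inner poll_read already advanced its own position; we
                // only track how much of the total skip is still left.
                this.skip_remaining -= n as u64;
            }
            Pin::new(&mut this.inner).poll_read(cx, buf)
        }
    }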
Change-Id: I2d22e1c8a835c0f3dd0c648917009b2bad4fd57c
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11442
Reviewed-by: raitobezarius <tvl@lahfa.xyz>
Tested-by: BuildkiteCI
If the resulting offset equals our current position, there's no need to
recreate a reader.
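A sketch of the short-circuit (simplified; the real logic lives in the
AsyncSeek implementation): resolve the requested SeekFrom to an absolute
offset first, and only recreate the reader when it actually differs from
the current position.

    use std::io::{self, SeekFrom};

    /// Returns None when the target equals the current position (keep the
    /// existing reader), Some(target) when a new reader is needed.
    fn resolve_seek(pos: u64, len: u64, seek: SeekFrom) -> io::Result<Option<u64>> {
        let target = match seek {
            SeekFrom::Start(o) => o,
            SeekFrom::End(o) => len
                .checked_add_signed(o)
                .ok_or_else(|| io::Error::new(io::ErrorKind::InvalidInput, "seek before start"))?,
            SeekFrom::Current(o) => pos
                .checked_add_signed(o)
                .ok_or_else(|| io::Error::new(io::ErrorKind::InvalidInput, "seek before start"))?,
        };
        Ok(if target == pos { None } else { Some(target) })
    }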
Change-Id: I855f0c79c514c16ca48a78e12978af2835fbbd6a
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11441
Tested-by: BuildkiteCI
Reviewed-by: raitobezarius <tvl@lahfa.xyz>
State that this case applies if the blob is small enough to fit inside a
single chunk.
Change-Id: I0383514729e686799599b629cf1303b284147bb4
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11440
Autosubmit: flokli <flokli@flokli.de>
Tested-by: BuildkiteCI
Reviewed-by: Connor Brewster <cbrewster@hey.com>
We previously returned Ok(None) when being asked for more granular
chunking info, signalling that the blob does not exist at all.
This is incorrect, however; we should return an empty Vec instead, as
documented in the trait.
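A hedged sketch of the contract over an in-memory map (simplified types;
the real trait is async and uses B3Digest / ChunkMeta):

    use std::collections::HashMap;
    use std::io;

    /// Stand-in for the protobuf ChunkMeta (chunk digest + size).
    #[allow(dead_code)]
    #[derive(Clone)]
    struct ChunkMeta {
        digest: [u8; 32],
        size: u64,
    }

    /// Ok(None)         -> the blob is not known at all
    /// Ok(Some(vec![])) -> the blob is known, but no more granular chunking exists
    /// Ok(Some(chunks)) -> the blob is known and composed of these chunks
    fn chunks(
        store: &HashMap<[u8; 32], Vec<ChunkMeta>>,
        blob_digest: &[u8; 32],
    ) -> io::Result<Option<Vec<ChunkMeta>>> {
        Ok(store.get(blob_digest).cloned())
    }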
Change-Id: I83ecc2027e0767134c7598792c2ee6d964853c66
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11439
Tested-by: BuildkiteCI
Reviewed-by: Connor Brewster <cbrewster@hey.com>
Autosubmit: flokli <flokli@flokli.de>
It's perfectly normal if we ask for more granular chunking info and the
backend responds that it doesn't have any.
Change-Id: I593ab3e53b4f4e70c99f39b266546d2ac8eb10c1
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11437
Reviewed-by: Connor Brewster <cbrewster@hey.com>
Tested-by: BuildkiteCI
Autosubmit: flokli <flokli@flokli.de>
We previously updated self.pos even when the underlying read returned an
error.
Also, use the ready! macro to remove the match block, and instrument
errors returned during start_seek.
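The fixed invariant, as a minimal synchronous analogue (not the real
poll_read code): the position only moves once the read has actually
succeeded.

    use std::io::{self, Read};

    fn read_tracked(inner: &mut impl Read, pos: &mut u64, buf: &mut [u8]) -> io::Result<usize> {
        let n = inner.read(buf)?; // on Err, `pos` is left untouched
        *pos += n as u64;
        Ok(n)
    }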
Change-Id: Ic32e26579d964a76b45687134acc48d72d67c36f
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11421
Reviewed-by: Brian Olsen <me@griff.name>
Reviewed-by: Connor Brewster <cbrewster@hey.com>
Tested-by: BuildkiteCI
Autosubmit: flokli <flokli@flokli.de>
This (and more) should now be covered by the generic testsuite
(in crate::blobservice::tests).
Change-Id: Ib3afc4f19f7e37a561b7398d43663dc941971f5c
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11365
Tested-by: BuildkiteCI
Reviewed-by: picnoir picnoir <picnoir@alternativebit.fr>
This introduces rstest-based tests. We also add fixtures for creating
some BlobService / DirectoryService out of thin air.
To test a PathInfoService, we don't really care too much about its
internal storage - ensuring those backends work is up to the castore
tests.
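A sketch of the rstest pattern (the names and the stand-in service are
made up): a #[fixture] produces the service, and tests simply take it as
an argument.

    use rstest::{fixture, rstest};

    /// Stand-in for an in-memory BlobService, just for the sketch.
    #[derive(Default)]
    struct MemoryBlobService {
        blobs: std::collections::HashMap<[u8; 32], Vec<u8>>,
    }

    #[fixture]
    fn blob_service() -> MemoryBlobService {
        MemoryBlobService::default()
    }

    #[rstest]
    fn starts_empty(blob_service: MemoryBlobService) {
        assert!(blob_service.blobs.is_empty());
    }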
Change-Id: Ia62af076ef9c9fbfcf8b020a781454ad299d972e
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11272
Tested-by: BuildkiteCI
Reviewed-by: Connor Brewster <cbrewster@hey.com>
This allows us to use containers around BlobServices as BlobServices too.
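A heavily simplified sketch of the idea (synchronous stand-in trait; the
real change may be shaped differently): a forwarding impl lets e.g. an
Arc around a BlobService be used as a BlobService itself.

    use std::sync::Arc;

    trait BlobService {
        fn has(&self, digest: &[u8; 32]) -> bool;
    }

    /// The container impl: anything behind an Arc is a BlobService as well.
    impl<S: BlobService + ?Sized> BlobService for Arc<S> {
        fn has(&self, digest: &[u8; 32]) -> bool {
            (**self).has(digest)
        }
    }

    struct MemoryBlobService; // stand-in backend

    impl BlobService for MemoryBlobService {
        fn has(&self, _digest: &[u8; 32]) -> bool {
            false
        }
    }

    fn uses_a_blob_service<B: BlobService>(svc: &B) -> bool {
        svc.has(&[0; 32])
    }

    fn main() {
        let svc: Arc<dyn BlobService> = Arc::new(MemoryBlobService);
        // The Arc itself satisfies the BlobService bound:
        assert!(!uses_a_blob_service(&svc));
    }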
Change-Id: I3c7feb074f42b4e07c550fb8dfa63cf81d448ab5
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11249
Autosubmit: flokli <flokli@flokli.de>
Tested-by: BuildkiteCI
Reviewed-by: raitobezarius <tvl@lahfa.xyz>
This functionality is provided by the object store backend too
(using `objectstore+file://$some_path`).
This backend also supports content-defined chunking and compresses
chunks with zstd.
Change-Id: I5968c713112c400d23897c59db06b6c713c9d8cb
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11205
Autosubmit: flokli <flokli@flokli.de>
Tested-by: BuildkiteCI
Reviewed-by: raitobezarius <tvl@lahfa.xyz>
This controls whether tvix-castore has support for various cloud
backends or not.
Use this to control the set of feature flags for the object_store
backend, and only enable the aws, azure and gcp ones if it's set.
In the future this can be used to enable/disable other cloud backends
too.
Without feature flags, `object_store` already supports the `InMemory`
and `LocalFilesystem` backends, and we also want to unconditionally
enable the `http` one. Make sure at least the construction of these
services is covered in the tests.
Similarly, the tvix-store crate, which provides the tvix-store CLI, has
a `cloud` feature flag too (defaulting to enabled).
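A rough sketch of how such gating looks in code (the scheme names here
are illustrative assumptions, not the exact ones used): the cloud-only
arms are only compiled when the `cloud` feature is enabled.

    fn scheme_supported(scheme: &str) -> bool {
        match scheme {
            // Available without extra feature flags (plus http, which is
            // enabled unconditionally):
            "objectstore+memory" | "objectstore+file" | "objectstore+http" => true,
            // Only compiled in with the `cloud` feature:
            #[cfg(feature = "cloud")]
            "objectstore+s3" | "objectstore+gs" | "objectstore+az" => true,
            _ => false,
        }
    }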
Change-Id: I9fb9c87b740e7dc83f8ff7a0862905d036d513f2
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11204
Autosubmit: flokli <flokli@flokli.de>
Reviewed-by: raitobezarius <tvl@lahfa.xyz>
Tested-by: BuildkiteCI
This will allow feature-flagging some of the backends.
Change-Id: Idffbf8b3fd154f5a3d938225c3871feffea8ff8c
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11200
Autosubmit: flokli <flokli@flokli.de>
Tested-by: BuildkiteCI
Reviewed-by: raitobezarius <tvl@lahfa.xyz>
We don't need to BASE64-encode here on our own; B3Digest has a Display
impl.
This also makes sure the `b3:` prefix is present in field values.
Change-Id: I0ce6ee0f7e7e99fb9b16872953a1b742e99be291
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11192
Reviewed-by: Connor Brewster <cbrewster@hey.com>
Tested-by: BuildkiteCI
Autosubmit: flokli <flokli@flokli.de>
Have derive_{blob,chunk}_path emit trace-level events for both the
values they're called with and the return value.
With a RUST_LOG filter in place, this doesn't get lost in other,
unrelated noise.
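A sketch with the tracing crate (the function body is made up; only the
attribute matters): #[instrument] at trace level records the arguments
as span fields, and `ret` additionally emits the return value.

    use tracing::instrument;

    #[instrument(level = "trace", ret)]
    fn derive_blob_path(base: &str, digest: &str) -> String {
        format!("{}/blobs/{}", base, digest)
    }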
Change-Id: Id2451e3657324eff482841eb26a22d19e22bde30
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11136
Autosubmit: flokli <flokli@flokli.de>
Reviewed-by: Connor Brewster <cbrewster@hey.com>
Tested-by: BuildkiteCI
Whenever this encounters an open_read(), it'll first check for more
granular chunking. If there's more granular chunking data available, a
ChunkedReader is constructed (which supports seeking backwards).
This is currently still a bit stupid and doesn't compose, as
`ChunkedReader` uses `self` as the `BlobService` to ask for the
individual chunks.
In a store-composition future, we might want to compose this
differently, essentially constructing `ChunkedReader` with another
`BlobService` representing the entire hierarchy, so there's a chance to
cache things locally and make fewer requests.
Change-Id: I22e0df4d6245f666d083b4f0b7114d3ac41d1dce
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11185
Tested-by: BuildkiteCI
Reviewed-by: Connor Brewster <cbrewster@hey.com>
The object_store crate supports a ton of different stores, with different schemes.
For now, use an objectstore+ scheme prefix to enable these.
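A sketch of the scheme handling, assuming the object_store crate's
parse_url helper (error handling condensed; the example addresses are
illustrative):

    fn parse_addr(
        addr: &str,
    ) -> Result<Box<dyn object_store::ObjectStore>, Box<dyn std::error::Error>> {
        // Strip the objectstore+ prefix, then let object_store figure out the rest.
        let rest = addr.strip_prefix("objectstore+").ok_or("unsupported scheme")?;
        let url = url::Url::parse(rest)?;
        let (store, _path) = object_store::parse_url(&url)?;
        Ok(store)
    }

    // e.g. parse_addr("objectstore+memory:///")
    //   or parse_addr("objectstore+file:///some/local/path")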
Change-Id: I946f76e32a0fb0867ef59060217894cda5b959b9
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11080
Tested-by: BuildkiteCI
Reviewed-by: Connor Brewster <cbrewster@hey.com>
Autosubmit: flokli <flokli@flokli.de>
This uses the `object_store` crate to expose a tvix-castore BlobService
backed by object storage.
It uses FastCDC to chunk blobs into smaller chunks when writing to it.
These chunks are exposed via the .chunks() method.
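A sketch of the chunking step with the fastcdc crate (the chunk size
parameters are illustrative, not the ones used here):

    fn main() {
        let data = vec![0u8; 1 << 20];
        // min / average / max chunk sizes, in bytes.
        let chunker = fastcdc::v2020::FastCDC::new(&data, 64 * 1024, 256 * 1024, 1024 * 1024);
        for chunk in chunker {
            // Each chunk gets hashed, compressed and uploaded as its own
            // object; the blob itself references the ordered list of chunks.
            println!("offset={} length={}", chunk.offset, chunk.length);
        }
    }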
Change-Id: I2858c403d4d6490cdca73ebef03c26290b2b3c8e
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11076
Reviewed-by: Connor Brewster <cbrewster@hey.com>
Tested-by: BuildkiteCI
Reviewed-by: Brian Olsen <me@griff.name>
If there's an unexpected test failure, print it out, rather than just
saying something is false even though it should be true.
Use .expect() for this, which displays the error if the call failed.
We can't use expect_err(), as our stores are not displayable, so use an
assertion with a message there.
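A minimal sketch of the pattern with a made-up constructor (not the
actual test code):

    /// Hypothetical stand-in for constructing a store from an address.
    fn make_service(uri: &str) -> Result<(), String> {
        if uri.starts_with("memory://") {
            Ok(())
        } else {
            Err(format!("unknown scheme in {}", uri))
        }
    }

    fn main() {
        // .expect() prints the error if construction unexpectedly fails:
        make_service("memory://").expect("creating the service must succeed");

        // The services themselves can't be formatted, so instead of
        // expect_err(), assert with a message for the must-fail case:
        let res = make_service("invalid://");
        assert!(res.is_err(), "creating the service from an invalid URI must fail");
    }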
Change-Id: I2d88861d979d107edc0717fbdb3cdac9a6bfc5e4
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11091
Tested-by: BuildkiteCI
Reviewed-by: Brian Olsen <me@griff.name>
Reviewed-by: flokli <flokli@flokli.de>
The publicly consumable thing here is ChunkedReader, not ChunkedBlob.
ChunkedBlob is a helper that can be used to get a new AsyncRead, but
not AsyncSeek. It is used internally by ChunkedReader whenever the
client seeks.
Make this more obvious by extending the documentation and putting
ChunkedReader at the top of this file.
Also make ChunkedBlob and its methods private, and give ChunkedReader a
more useful constructor (from_chunks, instead of from_chunked_blob).
Change-Id: I2399867591df923faa73927b924e7c116ad98dc0
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11079
Tested-by: BuildkiteCI
Reviewed-by: Brian Olsen <me@griff.name>
Reviewed-by: Connor Brewster <cbrewster@hey.com>
These provide seekable access into a Blob for which we have more
granular chunking information.
There's no support for verified streaming in here yet; this simply
produces a stream of readers, one per chunk, skipping irrelevant chunks
and the leading data of the first chunk.
A seek simply produces a new reader using the same process.
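The chunk-selection step, as a small self-contained sketch (not the
actual code): given the chunk sizes and a target offset, find the first
relevant chunk and how many bytes to skip inside it.

    /// Returns (index of the first relevant chunk, bytes to skip within it).
    fn first_relevant_chunk(chunk_sizes: &[u64], offset: u64) -> (usize, u64) {
        let mut pos = 0u64;
        for (i, size) in chunk_sizes.iter().enumerate() {
            if offset < pos + size {
                return (i, offset - pos);
            }
            pos += size;
        }
        // Offset at or past the end: no chunks remain, nothing to skip.
        (chunk_sizes.len(), 0)
    }

    fn main() {
        // With chunks of 10, 20 and 30 bytes, offset 25 lands in chunk 1,
        // 15 bytes into that chunk:
        assert_eq!(first_relevant_chunk(&[10, 20, 30], 25), (1, 15));
    }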
Change-Id: I37f76b752adce027586770475435f3990a6dee0b
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10731
Reviewed-by: Connor Brewster <cbrewster@hey.com>
Autosubmit: flokli <flokli@flokli.de>
Tested-by: BuildkiteCI
All chunks must have valid blake3 digests. It is allowed to send an
empty list if no more granular chunking is available.
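A sketch of the validation rule with simplified types (the real code
validates the protobuf ChunkMeta messages):

    /// Every chunk digest must be a valid blake3 digest (32 bytes);
    /// an empty list is fine and just means "no granular chunking".
    fn validate_chunks(chunk_digests: &[Vec<u8>]) -> Result<(), String> {
        for (i, digest) in chunk_digests.iter().enumerate() {
            if digest.len() != 32 {
                return Err(format!("chunk {} has an invalid digest length", i));
            }
        }
        Ok(())
    }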
Change-Id: I7ecb53579cdf40fd938bb68a85685751b4d3626f
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10726
Tested-by: BuildkiteCI
Reviewed-by: Connor Brewster <cbrewster@hey.com>
Autosubmit: flokli <flokli@flokli.de>
BlobService already implies Send and Sync, so we don't need to
explicitly list them here.
Change-Id: I58a4c5912be61a60acd961565979aa01d94ee0f7
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10727
Reviewed-by: Connor Brewster <cbrewster@hey.com>
Tested-by: BuildkiteCI
Autosubmit: flokli <flokli@flokli.de>
This adds support to retrieve a list of chunks for a given blob to the
BlobService interface.
While theoretically all chunk-awareness could be kept private inside
each BlobService reader, we'd then not be able to resolve individual
chunks from different BlobServices - and, due to this, not be able to
substitute chunks we already have in a more local store.
This function allows asking a BlobService for the list of chunks,
leaving any actual fetching up to the caller (be it through individual
calls to open_read, or by asking another store for them).
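A hedged sketch of the interface addition (simplified and synchronous;
the real trait is async and uses B3Digest / ChunkMeta types):

    trait BlobService {
        /// None if the blob is not known; otherwise the list of
        /// (chunk digest, chunk size) pairs it is composed of. Fetching the
        /// actual chunk data is left to the caller, via open_read on this
        /// or on another store.
        fn chunks(&self, blob_digest: &[u8; 32]) -> std::io::Result<Option<Vec<([u8; 32], u64)>>>;
    }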
Change-Id: I1d33c591195ed494be3aec71a8c804743cbe0dca
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10586
Autosubmit: flokli <flokli@flokli.de>
Reviewed-by: raitobezarius <tvl@lahfa.xyz>
Tested-by: BuildkiteCI
The docstrings were not updated when we made the BlobService trait
async; there's no more need to turn things into a sync reader.
Also, rearrange the stream manipulation a bit, and remove the need to
create a new VecDeque for each element in the stream - bytes::Bytes
already implements the Buf trait.
Fixes b/289.
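A small sketch of the Buf point: a bytes::Bytes can be consumed directly
(here via Buf::reader()) without first copying it into a VecDeque<u8>.

    use bytes::Buf;

    fn main() {
        let b = bytes::Bytes::from_static(b"hello");
        // Bytes already implements Buf, so it can act as a reader directly:
        let mut out = String::new();
        std::io::Read::read_to_string(&mut b.reader(), &mut out).unwrap();
        assert_eq!(out, "hello");
    }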
Change-Id: Id2bbedca5876b462e630c144b74cc289c3916c4d
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10582
Autosubmit: flokli <flokli@flokli.de>
Tested-by: BuildkiteCI
Reviewed-by: raitobezarius <tvl@lahfa.xyz>
While we currently mostly use it in an Arc, as we need to clone it
inside PathInfoService, there might be other use cases not requiring it
to be Clone.
Change-Id: I7bd337cd2e4c2d4154b385461eefa62c9b78345d
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10482
Autosubmit: flokli <flokli@flokli.de>
Reviewed-by: raitobezarius <tvl@lahfa.xyz>
Tested-by: BuildkiteCI
The simple filesystem `BlobService` enables a user to store blobs on an
existing filesystem, using a prefix-style layout in the provided root
directory: the first two bytes of the blake3 hashes are used as
directory prefixes.
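A sketch of the layout (the encoding details are an assumption for
illustration): the digest is hex-encoded and its first two bytes select
the subdirectory under the root.

    use std::path::{Path, PathBuf};

    fn blob_path(root: &Path, digest: &[u8; 32]) -> PathBuf {
        let hex: String = digest.iter().map(|b| format!("{:02x}", b)).collect();
        // The first two bytes of the digest (four hex characters) pick the
        // directory, e.g. <root>/b3a4/b3a4...
        root.join(&hex[..4]).join(hex)
    }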
Change-Id: I3451a688a6f39027b9c6517d853b95a87adb3a52
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10071
Autosubmit: raitobezarius <tvl@lahfa.xyz>
Tested-by: BuildkiteCI
Reviewed-by: flokli <flokli@flokli.de>
For all these calls, the caller has enough context about what it did, so
it should be fine to use io::Result here.
We pretty much only constructed crate::Error::StorageError before
anyways, so this conveys *more* information.
Change-Id: I5cabb3769c9c2314bab926d34dda748fda9d3ccc
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10328
Reviewed-by: raitobezarius <tvl@lahfa.xyz>
Tested-by: BuildkiteCI
Autosubmit: flokli <flokli@flokli.de>
We don't really require the path to be a PathBuf; we don't even require
it to be a Path, we only need it to be AsRef<Path>.
This removes some conversion in the from_addr cases, which can just
reuse `url.path()` (a `&str`).
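A sketch of the signature change (the type name is made up): accepting
AsRef<Path> lets callers pass a &str straight from url.path(), a &Path,
or an owned PathBuf alike.

    use std::path::{Path, PathBuf};

    struct FilesystemConfig {
        root: PathBuf,
    }

    impl FilesystemConfig {
        fn new(root: impl AsRef<Path>) -> Self {
            Self {
                root: root.as_ref().to_owned(),
            }
        }
    }

    fn main() {
        let _from_str = FilesystemConfig::new("/var/lib/store"); // e.g. url.path()
        let _from_path = FilesystemConfig::new(Path::new("/var/lib/store"));
        let _from_pathbuf = FilesystemConfig::new(PathBuf::from("/var/lib/store"));
    }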
Change-Id: I38d536dbaf0b44421e41f211a9ad2b13605179e9
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10258
Autosubmit: flokli <flokli@flokli.de>
Reviewed-by: raitobezarius <tvl@lahfa.xyz>
Tested-by: BuildkiteCI
It's been a while since the last sled release, and that one binds to a
pretty old version of zstd, requiring workarounds like cl/10090.
The upstream sled main branch currently has zstd halfway patched out
(it's a no-op, but the feature flag and options are still there), and it
has been in that state for a year.
Rather than maintaining our own fork of sled, let's just stop using the
compression feature in sled, dropping the version pin to zstd that way,
removing the need for cl/10090.
This doesn't mean we won't reintroduce per-blob compression - but we
probably won't let sled take care of the compression, and will do it
ourselves instead, which is necessary for more chunked blob storage
anyway.
Even though we do drop the feature flag, we still need to explicitly
call use_compression(false).
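A sketch of the resulting sled configuration (the path is illustrative):

    fn open_db() -> sled::Result<sled::Db> {
        sled::Config::new()
            .path("/some/path/to/the/sled/db")
            // Still set explicitly, even with the cargo feature dropped:
            .use_compression(false)
            .open()
    }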
Change-Id: I0e4892d29e41c76653272dc1a3625180da6fee12
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10257
Autosubmit: flokli <flokli@flokli.de>
Reviewed-by: raitobezarius <tvl@lahfa.xyz>
Tested-by: BuildkiteCI
We match to destructure a single pattern.
Change-Id: I564a3510b4860e90b3315a9639effc48ee88b483
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10233
Reviewed-by: tazjin <tazjin@tvl.su>
Autosubmit: flokli <flokli@flokli.de>
Tested-by: BuildkiteCI