# //tvix/store
This contains the code hosting the `tvix-store`.

For the local store, Nix realizes files on the filesystem in `/nix/store` (and
maintains some metadata in a SQLite database). For "remote stores", it
communicates this metadata in NAR (Nix ARchive) and NARInfo format.

Compared to the Nix model, `tvix-store` stores data at a much more granular
level, which allows for more deduplication and more granular copying. However,
enough information is preserved to still be able to render NARs and NARInfos
when needed.
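To make the deduplication point concrete, here is a minimal Rust sketch of
content-addressed chunk storage. This is an illustration, not `tvix-store`'s
actual data model: it assumes the `blake3` crate, and the chunk contents are
made up.

```rust
use std::collections::HashMap;

fn main() {
    // A toy blob store, keyed by the 32-byte BLAKE3 digest of the contents.
    let mut store: HashMap<[u8; 32], Vec<u8>> = HashMap::new();

    // Three chunks from two hypothetical store paths, one chunk shared.
    let chunks: [&[u8]; 3] = [b"shared library code", b"binary A", b"shared library code"];

    for chunk in chunks {
        // Identical contents hash to the same digest, so the shared
        // chunk is only stored once.
        store
            .entry(*blake3::hash(chunk).as_bytes())
            .or_insert_with(|| chunk.to_vec());
    }

    assert_eq!(store.len(), 2); // three chunks, but only two distinct blobs
}
```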
## More Information
Check the `protos/` subfolder for the definition of the exact RPC methods and
messages.
### Interacting with the gRPC service manually

The shell environment in `//tvix` provides `evans`, an interactive REPL-based
gRPC client. You can use it to connect to a `tvix-store` and call the various
RPC methods.
```console
$ cargo run -- daemon &
$ evans --host localhost --port 8000 -r repl
  ______
 |  ____|
 | |__    __   __   __ _   _ __    ___
 |  __|   \ \ / /  / _` | | '_ \  / __|
 | |____   \ V /  | (_| | | | | | \__ \
 |______|   \_/    \__,_| |_| |_| |___/

 more expressive universal gRPC client

tvix.store.v1@localhost:8000> service BlobService

tvix.store.v1.BlobService@localhost:8000> call Put --bytes-from-file
data (TYPE_BYTES) => /run/current-system/system
{
  "digest": "KOM3/IHEx7YfInAnlJpAElYezq0Sxn9fRz7xuClwNfA="
}

tvix.store.v1.BlobService@localhost:8000> call Get --bytes-as-base64
digest (TYPE_BYTES) => KOM3/IHEx7YfInAnlJpAElYezq0Sxn9fRz7xuClwNfA=
{
  "data": "eDg2XzY0LWxpbnV4"
}

$ echo eDg2XzY0LWxpbnV4 | base64 -d
x86_64-linux
```
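The digest returned by `Put` can also be reproduced locally. The following is
a hedged sketch assuming that blob digests are the BLAKE3 hash of the raw blob
contents and that evans renders `bytes` fields as base64; it uses the `blake3`
and `base64` crates.

```rust
use base64::Engine as _;

fn main() -> std::io::Result<()> {
    // Hash the same file we uploaded via `call Put` above.
    let data = std::fs::read("/run/current-system/system")?;
    let digest = blake3::hash(&data);

    // evans displays bytes fields base64-encoded, so encode the digest
    // the same way to compare it against the Put response.
    println!(
        "{}",
        base64::engine::general_purpose::STANDARD.encode(digest.as_bytes())
    );
    Ok(())
}
```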
Thanks to `tvix-store` providing gRPC Server Reflection (with the `reflection`
feature), you don't need to point `evans` to the `.proto` files.
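For reference, wiring up reflection in a tonic-based server looks roughly like
the sketch below. This is not `tvix-store`'s actual code: it assumes the
`tonic-reflection` crate (0.x era API, where the builder exposes `build()`)
and a build script that writes the compiled file descriptor set to
`$OUT_DIR/descriptor.bin`.

```rust
use tonic::transport::Server;

// Assumption: build.rs compiled the protos with a file_descriptor_set_path
// pointing at $OUT_DIR/descriptor.bin.
const FILE_DESCRIPTOR_SET: &[u8] =
    include_bytes!(concat!(env!("OUT_DIR"), "/descriptor.bin"));

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Registering the descriptor set is what lets clients like evans
    // discover services and message types at runtime.
    let reflection_service = tonic_reflection::server::Builder::configure()
        .register_encoded_file_descriptor_set(FILE_DESCRIPTOR_SET)
        .build()?;

    Server::builder()
        .add_service(reflection_service)
        // The evans example above connects to port 8000.
        .serve("[::1]:8000".parse()?)
        .await?;

    Ok(())
}
```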