DECODE-BASE64-STREAM-TO-SEQUENCE is the only thing that requires
anything fancy: we read into an adjustable array. An alternative would
be using REDIRECT-STREAM and WITH-OUTPUT-TO-STRING, but that is likely
slower (untested).
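As a rough illustration of the approach (sketched in Rust purely for
readability; the actual code is Common Lisp using an adjustable array),
the decoded data is accumulated chunk by chunk into a growable buffer:

    use std::io::Read;

    // Illustrative only: chunked reads appended to a growable Vec<u8>,
    // analogous to READ-SEQUENCE into an adjustable array.
    fn read_all(mut decoder: impl Read) -> std::io::Result<Vec<u8>> {
        let mut out = Vec::new();
        let mut chunk = [0u8; 4096];
        loop {
            let n = decoder.read(&mut chunk)?;
            if n == 0 {
                break; // decoder exhausted
            }
            out.extend_from_slice(&chunk[..n]);
        }
        Ok(out)
    }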
Test cases are kept for now to confirm that qbase64 conforms to our
expectations, but they can probably be dropped in favor of a few more
sample messages in the test suite.
:START and :END are sadly no longer supported and need to be replaced by
SUBSEQ.
Change-Id: I5928aed7551b0dea32ee09518ea6f604b40c2863
Reviewed-on: https://cl.tvl.fyi/c/depot/+/8586
Reviewed-by: sterni <sternenseemann@systemli.org>
Tested-by: BuildkiteCI
Autosubmit: sterni <sternenseemann@systemli.org>
It seems simple enough to use standard LET and a few more parentheses,
which stock Emacs can probably indent.
Change-Id: I0137a532186194f62f3a36f9bf05630af1afcdae
Reviewed-on: https://cl.tvl.fyi/c/depot/+/8584
Reviewed-by: sterni <sternenseemann@systemli.org>
Autosubmit: sterni <sternenseemann@systemli.org>
Tested-by: BuildkiteCI
Eventually, we'll want to replace dump-stream-binary with something
more efficient: given that we have flexi-streams, we can use something
that only handles matching element types without any problem.
REDIRECT-STREAM is much more efficient thanks to using an internal
buffer.
streams.lisp gets a new section at the beginning for grouping utilities
that don't have any real (internal) dependencies.
Change-Id: I141cd36440d532131f389be2768fdaa54e7c7218
Reviewed-on: https://cl.tvl.fyi/c/depot/+/8583
Reviewed-by: sterni <sternenseemann@systemli.org>
Autosubmit: sterni <sternenseemann@systemli.org>
Tested-by: BuildkiteCI
Porting the rest of the decoding (RFC 2047) and especially the encoding
over to qbase64 is still pending, as it is a little trickier.
Change-Id: Id4740eb074a387aeea2cb94b781e204248530799
Reviewed-on: https://cl.tvl.fyi/c/depot/+/8582
Autosubmit: sterni <sternenseemann@systemli.org>
Reviewed-by: sterni <sternenseemann@systemli.org>
Tested-by: BuildkiteCI
The input adapter streams were input streams yielding either binary or
character data that could be constructed from a variety of data
sources. The stream would take care not to destroy the underlying data
source (i.e. not close it if it was a stream), similar to
FILE-PORTIONs, but simpler.
Unfortunately, the implementation was quite inefficient: the streams
are ultimately defined in terms of a function that retrieves the next
character in the source. This only allows for an implementation of
READ-CHAR (and READ-BYTE). Thanks to cl/8559, READ-SEQUENCE can be used
on e.g. FILE-PORTION, but this was still negated by an input adapter
based on one: READ-SEQUENCE would then need to fall back on READ-CHAR
or READ-BYTE again.
Luckily, we can replace BINARY-INPUT-ADAPTER-STREAM and
CHARACTER-INPUT-ADAPTER-STREAM with a much simpler abstraction: Instead
of extra stream classes, we have a function, MAKE-INPUT-ADAPTER, which
returns an appropriate instance of FLEXI-STREAM based on a given source.
This way, the need for a distinction between binary and character input
adapters is eliminated, since FLEXI-STREAMS supports both binary and
character reads (external format is not yet handled, though).
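The shape of that dispatch, sketched in Rust for illustration only (the
real MAKE-INPUT-ADAPTER is Common Lisp and returns a FLEXI-STREAM; the
source variants below are assumptions):

    use std::fs::File;
    use std::io::{Cursor, Read};
    use std::path::PathBuf;

    // Possible adapter sources; purely illustrative.
    enum Source {
        Bytes(Vec<u8>),
        Path(PathBuf),
        Stream(Box<dyn Read>),
    }

    // One constructor instead of separate binary/character adapter classes;
    // the returned reader serves byte- and character-oriented consumers alike.
    fn make_input_adapter(source: Source) -> std::io::Result<Box<dyn Read>> {
        Ok(match source {
            Source::Bytes(b) => Box::new(Cursor::new(b)),
            Source::Path(p) => Box::new(File::open(p)?),
            Source::Stream(s) => s, // passed through, not closed here
        })
    }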
Consequently, the :binary keyword argument to MIME-BODY-STREAM can be
dropped.
flexi-streams provides stream classes for everything except a stream
that doesn't close the underlying one. Since we have already implemented
this in POSITIONED-FLEXI-INPUT-STREAM, we can split this functionality
into a new superclass ADAPTER-FLEXI-INPUT-STREAM.
This change also allows addressing the performance regression
encountered in cl/8559: it seems that flexi-streams performs worse when
we are reading byte by byte or char by char. (After this change mblog
is still two times slower than on r/6150.) By eliminating the adapter
streams, we can start utilizing READ-SEQUENCE in decoding code that
supports it (i.e. qbase64) and bring performance on par with r/6150
again. Surely there are also ways to gain back even more performance,
which would have to be determined using profiling. Buffering more
aggressively seems like a sure bet, though.
Switching to flexi-streams still seems like a no-brainer, as it allows
us to drop a lot of code that was quite hacky
(e.g. DELIMITED-INPUT-STREAM) and implements en/decoding handling we
did not support before, but would need for improved correctness.
Change-Id: Ie2d1f4e42b47512a5660a1ccc0deeec2bff9788d
Reviewed-on: https://cl.tvl.fyi/c/depot/+/8581
Autosubmit: sterni <sternenseemann@systemli.org>
Reviewed-by: sterni <sternenseemann@systemli.org>
Tested-by: BuildkiteCI
This refactor is driven by the following (ultimate) aims:
- Get rid of as much of the custom stream code in mime4cl as possible,
which means less code to maintain in the future.
- Lay the groundwork for correct handling of 8bit transfer encoding:
The mime4cl we inherited assumes that any MIME message can be decoded
completely by the CL implementation (in SBCL's case using latin1)
into CHARACTERs. This is not necessarily the case. flexi-streams
allows changing how the stream is decoded on the fly and also supports
reading the underlying bytes, which is a perfect fit for the
requirements of decoding MIME.
- Since flexi-streams uses trivial-gray-streams, it supports
READ-SEQUENCE. Taking advantage of this may improve decoding
performance significantly in the future.
This incurs the following changes:
- Naturally we now open given files as binary files in MIME-MESSAGE.
Given strings are encoded using STRING-TO-OCTETS and then passed on
to a new octet vector method. Instead of MY-STRING-INPUT-STREAM this
now uses flexi-streams' WITH-INPUT-FROM-SEQUENCE.
- OPEN-FILE-PORTION and OPEN-DECODED-FILE-PORTION need to be merged,
since the transfer encoding not only implies an extra decoder stream
that needs to be attached after the file portion stream, but also implies a
certain encoding of the stream itself (mostly binary vs. ASCII).
As flexi-streams can change their encoding on the fly this could be
untangled again, but it is not strictly necessary.
As before, we use the DATA slot of the file portion to create a fresh
stream if possible. Instead of strings, we now use a vector of octets
to match MIME-MESSAGE.
The actual portioned stream relies on POSITIONED-FLEXI-INPUT-STREAM, a
subclass of the stock FLEXI-INPUT-STREAM class, described below.
- POSITIONED-FLEXI-INPUT-STREAM replaces DELIMITED-INPUT-STREAM. It is
created using MAKE-POSITIONED-FLEXI-INPUT-STREAM, which accepts the
same arguments as MAKE-FLEXI-STREAM and, additionally, :IGNORE-CLOSE.
A POSITIONED-FLEXI-INPUT-STREAM works the same as a FLEXI-INPUT-STREAM,
but upon creation, the underlying stream is rewound or forwarded to the
argument given by :POSITION using
FILE-POSITION.
If :IGNORE-CLOSE is T, a call to CLOSE is not forwarded to the
underlying stream.
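For illustration only (in Rust, not the actual Common Lisp), the
described behaviour boils down to seeking on construction and
optionally swallowing the close:

    use std::io::{Read, Seek, SeekFrom};

    // Rough analogue of the described semantics; names are made up.
    struct PositionedReader<S: Read + Seek> {
        inner: Option<S>,
        ignore_close: bool,
    }

    impl<S: Read + Seek> PositionedReader<S> {
        // On creation, rewind or forward the underlying stream to `position`.
        fn new(mut inner: S, position: u64, ignore_close: bool)
            -> std::io::Result<Self> {
            inner.seek(SeekFrom::Start(position))?;
            Ok(Self { inner: Some(inner), ignore_close })
        }

        // Like CLOSE with :IGNORE-CLOSE T, this hands the underlying stream
        // back to the caller instead of dropping (and thereby closing) it.
        fn close(mut self) -> Option<S> {
            if self.ignore_close { self.inner.take() } else { None }
        }
    }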
Change-Id: I2d48c769bb110ca0b7cf52441bd63c1e1c2ccd04
Reviewed-on: https://cl.tvl.fyi/c/depot/+/8559
Reviewed-by: sterni <sternenseemann@systemli.org>
Autosubmit: sterni <sternenseemann@systemli.org>
Tested-by: BuildkiteCI
This walks down from a node until it reaches the requested path.
Change-Id: I2f9a15a8601db4d06c95d7b47cd6153264e203e3
Reviewed-on: https://cl.tvl.fyi/c/depot/+/8568
Reviewed-by: tazjin <tazjin@tvl.su>
Tested-by: BuildkiteCI
Autosubmit: flokli <flokli@flokli.de>
This distinguishes it better from the EvalIO::import_path method.
Also update the docstring to explain what it does (and what it doesn't).
Change-Id: I32a8b2869fa67a894df28532b22bf170961a2abf
Reviewed-on: https://cl.tvl.fyi/c/depot/+/8578
Reviewed-by: tazjin <tazjin@tvl.su>
Autosubmit: flokli <flokli@flokli.de>
Tested-by: BuildkiteCI
This was missed while renaming NixPath to StorePath.
Change-Id: Ibcc929c43b111e4370e8222c1dd86d403548367f
Reviewed-on: https://cl.tvl.fyi/c/depot/+/8577
Reviewed-by: tazjin <tazjin@tvl.su>
Tested-by: BuildkiteCI
Autosubmit: flokli <flokli@flokli.de>
Since rt.lisp seems to start tests in parallel, the informational output
about which sample file is being tested gets mangled in all sorts of
ways. The solution is to just loop over the sample files outside a test
and schedule a single test case per sample file from there.
Change-Id: I4494e4a526ce6d92a298cf7daf06c8013c7ca605
Reviewed-on: https://cl.tvl.fyi/c/depot/+/8569
Reviewed-by: sterni <sternenseemann@systemli.org>
Tested-by: BuildkiteCI
We currently only support querying by the output hash digest.
This makes the interface a bit simpler.
Change-Id: I80b285373f1923e85cb0e404c4b15d51a7f259ef
Reviewed-on: https://cl.tvl.fyi/c/depot/+/8570
Autosubmit: flokli <flokli@flokli.de>
Tested-by: BuildkiteCI
Reviewed-by: tazjin <tazjin@tvl.su>
This allows decomposing a path consisting of a store path AND a suffix
into a StorePath, and a PathBuf containing the rest.
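A minimal sketch of that decomposition (hypothetical names, not the
actual tvix-store signatures):

    use std::path::{Path, PathBuf};

    const STORE_DIR: &str = "/nix/store";

    // Split e.g. /nix/store/<hash>-foo/bin/bar into the store path component
    // ("<hash>-foo") and the remaining suffix ("bin/bar"). Illustrative only;
    // the real code parses the component into a proper StorePath struct.
    fn split_store_path(p: &Path) -> Option<(String, PathBuf)> {
        let rest = p.strip_prefix(STORE_DIR).ok()?;
        let mut components = rest.components();
        let store_path = components.next()?.as_os_str().to_str()?.to_owned();
        let suffix = components.as_path().to_path_buf();
        Some((store_path, suffix))
    }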
Change-Id: I81290e2fd804cdc9d1e88c71cb22c0fb882d7936
Reviewed-on: https://cl.tvl.fyi/c/depot/+/8567
Autosubmit: flokli <flokli@flokli.de>
Reviewed-by: tazjin <tazjin@tvl.su>
Tested-by: BuildkiteCI
Make it clearer that StorePath only encodes the first path component
after the STORE_DIR prefix. Also, move some of the comments around a
bit, so it makes more sense what's using what.
Change-Id: Ibb57373a13526e30c58ad561ca50e1336b091d94
Reviewed-on: https://cl.tvl.fyi/c/depot/+/8566
Reviewed-by: tazjin <tazjin@tvl.su>
Tested-by: BuildkiteCI
Autosubmit: flokli <flokli@flokli.de>
In case `target_user_ssh_key` points to an empty string, nixos-copy.sh
just doesn't set `IdentityFile=` at all.
This allows using deploy-nixos without any explicitly passed ssh keys,
instead picking up whatever ssh setup the user has configured locally.
Change-Id: If335ce8434627e61da13bf6923b9767085af08a5
Reviewed-on: https://cl.tvl.fyi/c/depot/+/8576
Autosubmit: flokli <flokli@flokli.de>
Reviewed-by: tazjin <tazjin@tvl.su>
Tested-by: BuildkiteCI
It's okay if these calls mutate some internal state inside an
implementation.
Change-Id: I12bb11bde0310778c3da1275696bf7de058863a3
Reviewed-on: https://cl.tvl.fyi/c/depot/+/8571
Tested-by: BuildkiteCI
Reviewed-by: tazjin <tazjin@tvl.su>
Make it more explicit that we expect the from_string calls to fail here.
Change-Id: Ib3d46fc0850e364125e3548670ef301eeea2e45c
Reviewed-on: https://cl.tvl.fyi/c/depot/+/8565
Autosubmit: flokli <flokli@flokli.de>
Reviewed-by: tazjin <tazjin@tvl.su>
Tested-by: BuildkiteCI
This connects to a (remote) tvix-store BlobService over gRPC.
Change-Id: If31f706738a5c3445886c117feca8b61f3203e9e
Reviewed-on: https://cl.tvl.fyi/c/depot/+/8552
Reviewed-by: tazjin <tazjin@tvl.su>
Tested-by: BuildkiteCI
emacs-overlay has been held back because package(s) needed for
//users/sterni/emacs are broken in the latest version.
Change-Id: Icb8bf34b4d039f5c24ec8f30fd8f47205a343988
Reviewed-on: https://cl.tvl.fyi/c/depot/+/8562
Tested-by: BuildkiteCI
Autosubmit: tazjin <tazjin@tvl.su>
Reviewed-by: flokli <flokli@flokli.de>
Whether chunking is involved or not is an implementation detail of each
Blobstore. Consumers of a whole blob shouldn't need to worry about that.
It currently is not visible in the gRPC interface either. It
shouldn't bleed into everything.
Let the BlobService trait provide `open_read` and `open_write` methods,
which return handles providing io::Read or io::Write, and leave the
details up to the implementation.
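In rough outline, that trait has the following shape (the digest and
error types here are placeholders, not the actual tvix-store
definitions):

    use std::io::{Read, Write};

    // Sketch only; the real trait differs in its digest, error and
    // writer types.
    pub trait BlobService {
        // Hand out a reader over the complete blob, hiding any chunking.
        fn open_read(&self, digest: &[u8; 32])
            -> std::io::Result<Option<Box<dyn Read>>>;

        // Hand out a writer for a new blob; how the data is chunked and
        // stored is entirely up to the implementation.
        fn open_write(&self) -> std::io::Result<Box<dyn Write>>;
    }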
This means our custom BlobReader module can go away, and all the
chunking bits in there, too.
In the future, we might still want to add more chunking-aware syncing,
but as a syncing strategy that some stores can expose, not as a fundamental
protocol component.
This currently needs "SyncReadIntoAsyncRead", taken and vendored in from
https://github.com/tokio-rs/tokio/pull/5669.
It provides an AsyncRead for a sync Read, which is necessary to connect
our (sync) BlobReader interface to a gRPC server implementation.
As an alternative, we could also make the BlobReader itself async, and
let consumers of the trait (EvalIO) deal with the async-ness, but this
is less of a change for now.
In terms of vendoring, I initially tried to move our tokio crate to
these commits, but ran into version incompatibilities, so let's
vendor it in for now.
Change-Id: I5969ebbc4c0e1ceece47981be3b9e7cfb3f59ad0
Reviewed-on: https://cl.tvl.fyi/c/depot/+/8551
Tested-by: BuildkiteCI
Reviewed-by: tazjin <tazjin@tvl.su>
We already returned UnexpectedEof in case the reader stopped returning
bytes too early, but similarly we should also fail if there are still
bytes left to be read in the passed reader.
We normally use the NAR writer to produce new NAR files, so the readers
point to the blobs we actually want to render, and having some data left
in there should be an error.
If for some reason the reader points to more data than just the blob,
the `.take` method can be used to limit it to the (known) size.
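For example, `std::io::Read::take` caps the reader at the known blob
size, so extra data in the underlying source never reaches the writer
(the function below is a stand-in, not the actual NAR writer API):

    use std::io::{copy, Read, Write};

    // Stand-in for handing a blob of `size` bytes to the NAR writer:
    // limiting the reader up front means the writer sees exactly the blob,
    // and trailing data in the source does not trigger an error.
    fn render_blob<W: Write>(
        nar: &mut W,
        reader: impl Read,
        size: u64,
    ) -> std::io::Result<u64> {
        let mut limited = reader.take(size);
        copy(&mut limited, nar)
    }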
Change-Id: I9e8fa0a6dd9c794492abb6dc9e55995e619cb3bb
Reviewed-on: https://cl.tvl.fyi/c/depot/+/8553
Tested-by: BuildkiteCI
Reviewed-by: tazjin <tazjin@tvl.su>
This makes sure that initializing coder-stream-mixin (for the most part)
has the same interface as initializing qbase64:decode-stream. This will
make integrating that as a faster replacement for
mime4cl:base64-decoder-stream a bit easier.
The idea is to replace the char-by-char base64 decoder with one that
supports read-sequence. After that, delimited-input-stream needs to
gain support for read-sequence as well, so we can actually take
advantage of this fact. Finally, we'll have to evaluate the remaining
decoders and think about switching the (base64) encoders over as well.
Change-Id: If971da02437506e00a7c9fab2b94efc42725e62d
Reviewed-on: https://cl.tvl.fyi/c/depot/+/8555
Reviewed-by: sterni <sternenseemann@systemli.org>
Tested-by: BuildkiteCI
Autosubmit: sterni <sternenseemann@systemli.org>
For whatever reason, there were two more or less identical tests,
mime.1 and mime.2, in the mime4cl test suite: the former tested
*sample1-file* and the latter all messages in *samples-directory*, both
in the same way, parsing the original and a re-rendered version of the
message to check whether they were equal.
We can just move sample1.msg into *samples-directory*, get rid
of *sample1-file* and thus pave the way for more test messages in the
future.
Change-Id: I843be331682b731af6ae02a4648ba1c64aaf59a5
Reviewed-on: https://cl.tvl.fyi/c/depot/+/8546
Reviewed-by: sterni <sternenseemann@systemli.org>
Autosubmit: sterni <sternenseemann@systemli.org>
Tested-by: BuildkiteCI
The generic function itself needs to be defined using defgeneric;
defmethod is used for defining a method of a generic function, i.e. how
it should behave when confronted with a certain class.
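The distinction is loosely analogous to declaring a trait method versus
implementing it for a concrete type in Rust (analogy only, the code in
question is Common Lisp):

    // The "generic function" declaration: the operation exists on its own ...
    trait Renderable {
        fn render(&self) -> String;
    }

    struct Header(String);

    // ... and a method spells out the behaviour for one specific type.
    impl Renderable for Header {
        fn render(&self) -> String {
            format!("Header: {}", self.0)
        }
    }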
Change-Id: Idd38afa02b56c5002e215decfff7f0c25267eab5
Reviewed-on: https://cl.tvl.fyi/c/depot/+/8532
Autosubmit: sterni <sternenseemann@systemli.org>
Reviewed-by: sterni <sternenseemann@systemli.org>
Tested-by: BuildkiteCI
These packages are invalid in Nix, and are worked around in nixpkgs with
underscores, but the underscores are invalid in the Docker registry
protocol.
We work around this by detecting this case and adding the underscore
to yield the correct package reference. There is no case where this
workaround can break something, as there can be no valid package
matching the regular expression.
This relates to https://github.com/tazjin/nixery/issues/158
Change-Id: I7990cdb534a8e86c2ceee2c589a2636af70a4a03
Reviewed-on: https://cl.tvl.fyi/c/depot/+/8531
Tested-by: BuildkiteCI
Autosubmit: tazjin <tazjin@tvl.su>
Reviewed-by: flokli <flokli@flokli.de>
This uses tonic to generate the full set of gRPC clients for Yandex
Cloud. Includes some utility functions like an authentication
interceptor to make these actually work.
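An authentication interceptor for tonic generally looks like the sketch
below (the header value is a placeholder; the interceptor in this crate
may differ in detail):

    use tonic::{metadata::MetadataValue, Request, Status};

    // Inject an IAM token into every outgoing request; can be attached to a
    // generated client via `with_interceptor`. The token value is a placeholder.
    fn auth_interceptor(mut req: Request<()>) -> Result<Request<()>, Status> {
        let token: MetadataValue<_> = "Bearer <iam-token>"
            .parse()
            .map_err(|_| Status::invalid_argument("invalid auth token"))?;
        req.metadata_mut().insert("authorization", token);
        Ok(req)
    }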
Since the upstream protos are exported regularly I've decided that the
versioning will simply be date-based.
The point of this is journaldriver integration, of course, hence also
the log-centric example code.
Change-Id: I00a615dcba80030e7f9bcfd476b2cfdb298f130d
Reviewed-on: https://cl.tvl.fyi/c/depot/+/8525
Tested-by: BuildkiteCI
Reviewed-by: tazjin <tazjin@tvl.su>
This should make the canon pipeline gcroot the deps tarball, making it
less likely to be garbage-collected and rebuilt unnecessarily (which
usually incurs a hash change due to impurities).
Change-Id: I92a353d0f45056fffbc016c44a1ae05a63d76849
Reviewed-on: https://cl.tvl.fyi/c/depot/+/8527
Tested-by: BuildkiteCI
Reviewed-by: flokli <flokli@flokli.de>
Autosubmit: sterni <sternenseemann@systemli.org>
* //3p/sources: Note that emacs-overlay is not updated for now, as
changes in emacs HEAD break //users/sterni/emacs.
* //3p/gerrit_plugins/code-owners: deps hash changed once again
or was no longer in the Nix store.
Unfortunately, building the deps derivations from scratch for gerrit
and the gerrit plugins no longer works due to a nixpkgs regression:
Due to an (operator precedence) mistake in the way the deps
derivation's installPhase is computed, it would append extra code to
the installPhase provided by us, causing a bash syntax error.
I have proposed a fix for this
upstream (<https://github.com/NixOS/nixpkgs/pull/228305>). Adding a
workaround in the repo would be possible, but a bit annoying. Since
the derivations are fixed-output anyway, I've opted to build the
missing deps derivation (for code-owners) locally using the fixed
nixpkgs, updated the sha256 and copied the result into whitby's Nix
store. Hopefully, by the next time we need to rebuild the deps
derivations, the fix will have propagated into the NixOS unstable
channel.
* //users/grfn/system/system:roswellSystem: Use mysql80 from stable.
See also https://github.com/NixOS/nixpkgs/issues/226673.
Change-Id: I9b9d57f589be4cdc3fd4f39729c170a25a655b74
Reviewed-on: https://cl.tvl.fyi/c/depot/+/8483
Autosubmit: sterni <sternenseemann@systemli.org>
Reviewed-by: flokli <flokli@flokli.de>
Tested-by: BuildkiteCI
Sets up a virtual machine image that is bootable on Yandex Cloud.
There are some slightly wonky behaviours still, like cloud-init
apparently putting all keys into root's authorized_keys no matter what
is specified in the metadata, but it does work now.
Change-Id: I57dcb7fcfa6872a28855dc1347f73a6db3c56828
Reviewed-on: https://cl.tvl.fyi/c/depot/+/8496
Tested-by: BuildkiteCI
Reviewed-by: tazjin <tazjin@tvl.su>
This was a bit trickier than I anticipated, because there are no good
ways to avoid passing the credentials around manually.
What's basically happening now is that the credentials for the state
bucket are checked in (encrypted), and sourcing `creds.fish` uses the
cloud HSM to decrypt and load them into the environment.
Change-Id: I3f5ce1c9bd9d5efbf1013414f94771a09ea3a488
Reviewed-on: https://cl.tvl.fyi/c/depot/+/8494
Tested-by: BuildkiteCI
Reviewed-by: tazjin <tazjin@tvl.su>
Doesn't actually contain any configuration yet, just setting up TF
with the right providers and so on.
Change-Id: Ia7128dd977b4ff69eebaa36c6cad6ac104cafcdb
Reviewed-on: https://cl.tvl.fyi/c/depot/+/8492
Tested-by: BuildkiteCI
Reviewed-by: tazjin <tazjin@tvl.su>