
“Getting bytes to disk more quickly”—the story of optimizing substitute download speed in Guix:
guix.gnu.org/en/blog/2021/gett

@civodul Another potential lead would be to decrease the substitution granularity. Instead of substituting a full NAR, we could chop the NAR archive into smaller chunks and then substitute only the chunks that changed. This would require introducing a new "chunk store" and a chunk concatenation phase somewhat more complex than plain archive decompression, though. A toy sketch of the idea follows below.
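
To make that concrete, here is a purely illustrative Python sketch of what the chunker and the chunk store could look like. None of this is from an actual implementation: real tools (casync among them) use a cheap rolling hash such as buzhash instead of hashing a trailing window with SHA-256, and all names and constants below are made up.

    import hashlib

    WINDOW = 48                   # bytes of context that decide a cut point
    MASK = (1 << 12) - 1          # ~4 KiB average chunk size
    MIN_CHUNK, MAX_CHUNK = 1 << 10, 1 << 16

    def chunk(data: bytes) -> list[bytes]:
        """Content-defined chunking: a boundary depends only on the bytes
        in a small trailing window, not on absolute offsets, so a local
        edit disturbs nearby chunks and the split re-synchronizes after."""
        chunks, start, i = [], 0, MIN_CHUNK
        while i < len(data):
            window = data[max(0, i - WINDOW):i]
            fp = int.from_bytes(hashlib.sha256(window).digest()[:4], "big")
            if fp & MASK == 0 or i - start >= MAX_CHUNK:
                chunks.append(data[start:i])
                start, i = i, i + MIN_CHUNK   # enforce the minimum size
            else:
                i += 1
        if start < len(data):
            chunks.append(data[start:])
        return chunks

    def publish(data: bytes, store: dict) -> list[str]:
        """Split a NAR into chunks, put them in the content-addressed
        chunk store, and return the NAR's index (its list of chunk
        hashes).  A client would download only the hashes it misses."""
        index = []
        for c in chunk(data):
            h = hashlib.sha256(c).hexdigest()
            store.setdefault(h, c)
            index.append(h)
        return index

    def reassemble(index: list[str], store: dict) -> bytes:
        """The concatenation phase: rebuild the NAR from its index."""
        return b"".join(store[h] for h in index)

Round-tripping is then just reassemble(publish(nar, store), store) == nar.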

This kind of finer-grained substitution could in theory help with mass rebuilds following a core component update (cf. yesterday's openssl bump). By breaking the direct dependency between the substitution artefact (here, NAR chunks) and the target store path, we also escape the input-addressed path propagation hell: a rebuild that only rewrites embedded store paths leaves most of the NAR's bytes, and hence most chunks, unchanged. The snippet below illustrates this.
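
Again purely as an illustration, reusing the toy chunker above: simulate a rebuild that rewrites a single 32-byte region of an otherwise identical NAR (think of one embedded store hash changing) and count how many chunks are already present in the store.

    import os

    store = {}
    nar = os.urandom(1 << 20)                 # stand-in for an uncompressed NAR
    old_index = publish(nar, store)

    # Simulate a rebuild that only rewrites one embedded store path:
    patched = bytearray(nar)
    patched[500_000:500_032] = os.urandom(32)
    new_index = publish(bytes(patched), store)

    old = set(old_index)
    reused = sum(h in old for h in new_index)
    print(f"{reused}/{len(new_index)} chunks were already in the store")

Because chunk boundaries are content-defined, only the chunk containing the edit (and at most its immediate neighbours) needs to travel; everything else is deduplicated.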

It provides some of the benefits we could get from an intensional store (no more mass path updates) without introducing most of the associated complexity.

Some people in the Nix community are experimenting with this idea, using casync [1] as a "chunk store", with some pretty encouraging first results.

[1] https://github.com/systemd/casync
@civodul Just to be clear, it won't prevent a mass rebuild or reduce the overall build farm load. It'll only help on the substitution side.

@Ninjatrappeur “Chunking” is a great idea! I agree that it’s one way to get the transport benefits of content addressability without using the intensional model.

The “digest” protocol linked at the end of the article is an experiment along these lines:
git.savannah.gnu.org/cgit/guix

@civodul oops, my bad. For some reason I skipped this paragraph, my apologies.

That's exciting news for those of us living behind good old French rural 512 kbps ADSL connections ;)

Looking forward to seeing how far you manage to go with this <3