"content-address*ed*" derivation is misleading because all derivations
are *themselves* content-addressed. What may or may not be
content-addressed is not derivation itself, but the *output* of the
derivation.
The outputs are not *part* of the derivation (for then the derivation
wouldn't be complete before we built it) but rather separate entities
produced by the derivation.
"content-adddress*ed*" is not correctly because it can only describe
what the derivation *is*, and that is not what we are trying to do.
"content-address*ing*" is correct because it describes what the
derivation *does* --- it produces content-addressed data.
This is a big step toward documenting the store layer on its own, separately from the evaluator (and `builtins.derivation`).
Co-authored-by: Robert Hensing <roberth@users.noreply.github.com>
Curl creates sockets without setting FD_CLOEXEC/SOCK_CLOEXEC, which can
cause connections to remain open forever when using commands like
`nix shell`.
This change sets the FD_CLOEXEC flag using a CURLOPT_SOCKOPTFUNCTION
callback.
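Roughly, the callback takes the following shape (a sketch only, with
illustrative names like `sockoptCallback`, not the exact code in this
change):

    #include <curl/curl.h>
    #include <fcntl.h>

    // Called by curl for every socket it creates, before connecting.
    static int sockoptCallback(void *, curl_socket_t fd, curlsocktype)
    {
        // Mark the socket close-on-exec so child processes (e.g. the
        // shell spawned by `nix shell`) do not inherit the connection.
        fcntl(fd, F_SETFD, FD_CLOEXEC);
        return CURL_SOCKOPT_OK;
    }

    static void setupHandle(CURL * handle)
    {
        curl_easy_setopt(handle, CURLOPT_SOCKOPTFUNCTION, sockoptCallback);
    }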
This fixes dynamic derivations, reverting #9081.
I believe that this time around, #9052 is fixed. When I first rebased
this, tests were failing (which wasn't the case before). Those test
failures were caused by the crude way in which the outer goal tried to
exit with the inner goal's status.
Now, that error handling has been reworked to be more faithful. The
exit status and exception of the inner goal are returned by the outer
goal. The exception was what was causing the test failures, but I
believe it was not having the right error code (there is more than one
for failure) that caused #9081.
The only cost of doing things the "right way" was that I had to
introduce a hacky `preserveException` boolean. I don't like this, but,
then again, none of us like anything about how the scheduler works.
Issue #11927 is still there to clean everything up, subsuming the need
for any `preserveException` because I doubt we will be fishing
information out of state machines like this at all.
This reverts commit 8440afbed7.
Co-Authored-By: Eelco Dolstra <edolstra@gmail.com>
The main improvement is that the new message gives an example of a path
that is referenced, which should make it easier to track down. While
there, I also clarified the wording, saying exactly why the paths in
question were illegal.
(cherry picked from commit 4e5d1b281e)
This does not include any automation for the release branch, but
is based on the configuration of https://github.com/NixOS/nix/pull/12349
pre-commit run -a nixfmt-rfc-style
I started getting the warning `warning: download buffer is full; consider increasing the 'download-buffer-size' setting`, but the documentation does not make it obvious what unit of measurement it accepts.
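For reference, a minimal `nix.conf` sketch, assuming (as this change is
meant to make explicit) that the value is interpreted as bytes:

    # 268435456 bytes = 256 MiB
    download-buffer-size = 268435456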
This allows RemoteStore::addMultipleToStore() to free the Source
objects early (and in particular the associated sinkToSource()
buffers). This should fix #7359. For example, memory consumption of

    nix copy --to ssh-ng://localhost?remote-store=/tmp/nix --derivation --no-check-sigs \
      /nix/store/4p9xmfgnvclqpii8pxqcwcvl9bxqy2xf-nixos-system-...drv

went from 353 MB to 74 MB.
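The underlying idea, sketched in isolation (illustrative code, not the
actual RemoteStore implementation):

    #include <memory>
    #include <vector>

    struct Source { virtual ~Source() = default; };

    // Take ownership of each element as it is consumed, so its buffer
    // is freed as soon as it has been streamed rather than staying
    // alive until the whole batch is done.
    void addMultiple(std::vector<std::unique_ptr<Source>> sources)
    {
        for (auto & slot : sources) {
            auto source = std::move(slot);  // this element is now owned here
            // ... stream *source to the remote store ...
        }   // `source` (and its sinkToSource() buffer) is destroyed here
    }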
During garbage collection we cache several things -- a set of known-dead
paths, a set of known-alive paths, and a map of paths to their derivers.
Currently they use STL maps and sets, which are ordered structures that
typically are backed by binary trees. Since we are putting pseudorandom
paths into these and looking them up by exact key, we don't need the
ordering, and we're paying a nontrivial cost per insertion.
The existing maps have O(log n) insertion and lookup time.
We could instead use unordered maps and sets, which are typically backed
by hash tables and have amortized O(1) insertion and lookup time.
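In container terms, the change amounts to something like this
(illustrative element types; the real code uses the store's path types):

    #include <string>
    #include <unordered_map>
    #include <unordered_set>

    // Hash-backed containers are enough here: the paths are only ever
    // looked up by exact key and never iterated in order.
    std::unordered_set<std::string> dead;    // known-dead paths (was std::set)
    std::unordered_set<std::string> alive;   // known-alive paths (was std::set)
    std::unordered_map<std::string, std::string> derivers;  // path -> deriver (was std::map)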
On my system this appears to result in a dramatic speedup -- prior to
this patch I was able to delete 400k paths out of 9.5 million over the
course of 34.5 hours. After this patch the same result took 89 minutes.
This result should NOT be taken at face value because the two runs
aren't really comparable; in particular the first started when I had 9.5
million store paths and the second started with 7.8 million, so we are
deleting a different set of paths starting from a much cleaner
filesystem. But I do think it's indicative.
Related: https://github.com/NixOS/nix/issues/9581
This fixes segfaults with `nix copy` when there was an error while
processing addMultipleToStore.
Running with ASAN/TSAN pointed at a use-after-free: threads from the
pool were accessing the graph declared in processGraph after the
function had exited and its local variables had been destroyed.
It turns out that if there is an error before pool.process() is called,
for example while we are still enqueuing tasks, then pool.process()
isn't called and threads are still left to run.
By creating the pool last, we ensure that it is stopped first, before
the other destructors run, even if an exception happens early.
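A self-contained illustration of the destruction-order idea (simplified,
with a stand-in `Pool`; not the actual processGraph code):

    #include <map>
    #include <string>
    #include <thread>
    #include <vector>

    // Minimal stand-in for a thread pool whose destructor joins its
    // workers (std::jthread, C++20, joins on destruction).
    struct Pool
    {
        std::vector<std::jthread> workers;
    };

    void processAll()
    {
        std::map<std::string, int> graph;   // shared state the workers use

        // Declared after `graph`, so it is destroyed *first*: the workers
        // are joined before `graph` goes away, even if an exception is
        // thrown below while work is still being enqueued.
        Pool pool;

        pool.workers.emplace_back([&graph] { (void) graph.size(); /* ... */ });
        // ... enqueue more work; an exception thrown here is now safe ...
    }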
[ lix porting note: nix does not name threads so the patch has been
adapted to not pass thread name ]
Link: https://git.lix.systems/lix-project/lix/issues/618
Link: https://gerrit.lix.systems/c/lix/+/2355
Fix a footgun. In my case, I had a couple of build ("output")
directories sitting around and ran

    rm -rf build-*

and was confused for a bit about why a meson.build file was missing.
Probably also helps with autocompletion.
I tried meson-build-support first, but I had to add something like a
nix- prefix in order to make meson happy. They've reserved the meson-
prefix.