Realisations are conceptually key-value pairs, mapping `DrvOutputs` (the
key) to information about that derivation output.
This separates out the value type, which will be useful in maps, etc., where
we don't want to denormalize by including the key twice.
This matches similar changes for existing types:
| keyed | unkeyed |
|--------------------|------------------------|
| `ValidPathInfo` | `UnkeyedValidPathInfo` |
| `KeyedBuildResult` | `BuildResult` |
| `Realisation` | `UnkeyedRealisation` |
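
As a rough sketch of the pattern (the fields and the inheritance here are
illustrative, not the actual Nix definitions):

```cpp
// Sketch of the keyed/unkeyed split; fields are simplified stand-ins.
#include <map>
#include <string>
#include <tuple>

struct DrvOutput {                 // the key: one output of one derivation
    std::string drvHash, outputName;
    bool operator<(const DrvOutput & other) const
    {
        return std::tie(drvHash, outputName) < std::tie(other.drvHash, other.outputName);
    }
};

struct UnkeyedRealisation {        // the value: info about that output
    std::string outPath;
};

struct Realisation : UnkeyedRealisation {   // the value together with its key
    DrvOutput id;
};

// In maps, the key is not duplicated inside the value:
using RealisationsByOutput = std::map<DrvOutput, UnkeyedRealisation>;
```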
Co-authored-by: Sergei Zimmerman <sergei@zimmerman.foo>
This is sometimes easier / more performant to implement, and
independently it is also a more convenient interface for many callers.
The existing store-wide `getFSAccessor` is only used for:
- `nix why-depends`
- the evaluator
I hope we can get rid of it for those, too, and then we have the option
of getting rid of the store-wide method.
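
A sketch of the interface shape being described, with assumed (not actual)
signatures:

```cpp
// Illustrative only: a store-wide accessor alongside a per-store-path one.
#include <memory>

struct SourceAccessor;   // stand-ins for the real Nix types
class StorePath;

struct Store {
    virtual ~Store() = default;

    // Store-wide accessor over the whole store namespace.
    virtual std::shared_ptr<SourceAccessor> getFSAccessor() = 0;

    // Per-path accessor: often easier or cheaper for a store to implement,
    // and more convenient for callers that only care about a single path.
    virtual std::shared_ptr<SourceAccessor> getFSAccessor(const StorePath & path) = 0;
};
```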
Co-authored-by: Sergei Zimmerman <sergei@zimmerman.foo>
`perf c2c` shows a lot of cacheline conflicts between purely read-only
Store methods (like `parseStorePath()`) and the Sync classes. So
allocate pathInfoCache separately to avoid that.
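
A minimal sketch of the idea, using simplified stand-ins for `Sync` and the
cache type:

```cpp
#include <memory>
#include <mutex>
#include <string>
#include <unordered_map>

// Simplified stand-in for nix::Sync: a value guarded by its own mutex.
template<typename T>
struct Sync {
    std::mutex mutex;
    T value;
};

struct Store {
    // Read-mostly state used by parseStorePath() and friends.
    std::string storeDir = "/nix/store";

    // Heap-allocating the cache keeps its frequently written mutex off the
    // cache lines holding the read-only members above, avoiding false
    // sharing between readers and cache updates.
    std::unique_ptr<Sync<std::unordered_map<std::string, int>>> pathInfoCache =
        std::make_unique<Sync<std::unordered_map<std::string, int>>>();
};
```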
This caused RemoteStore::queryPathInfoUncached() to mark the
connection as invalid (see
RemoteStore::ConnectionHandle::~ConnectionHandle()), making it
disconnect and reconnect after every lookup of an invalid path. That
led to huge slowdowns in conjunction with
19f89eb684 and lazy-trees.
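
For context, the mechanism is roughly the following (a sketch with simplified
names, not the actual code):

```cpp
// If the handle is destroyed while an exception is propagating, the
// connection is not returned to the pool, forcing a reconnect next time.
#include <exception>

struct ConnectionHandleSketch {
    bool & connectionStillGood;   // stand-in for "return to pool vs. drop"

    ~ConnectionHandleSketch()
    {
        if (std::uncaught_exceptions() > 0)
            connectionStillGood = false;   // dropped: next operation reconnects
    }
};
```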
The problem with the old code was that it used `getUri` both for the `diskCache`
key and for logging. This is really bad because it mixes the textual, human-readable
representation with the caching.
Using `getUri` for the cache key is also really problematic for the S3 store,
since it doesn't include the `endpoint` in the cache key, so caching there was totally broken.
This starts separating the logging / cache concerns by introducing a
`getHumanReadableURI` that should only be used for logging. The caching
logic now instead uses `getReference().render(/*withParams=*/false)` exclusively.
That still needs to be fixed in follow-ups, because it remains fragile and
broken for some store types (but it was already broken before).
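
A sketch of the resulting split, with a simplified stand-in for the store
reference type:

```cpp
#include <string>

struct StoreReference {
    std::string scheme, authority, params;   // params pre-rendered, e.g. "?region=..."

    std::string render(bool withParams) const
    {
        auto s = scheme + "://" + authority;
        if (withParams) s += params;
        return s;
    }
};

struct Store {
    StoreReference reference;

    // For logging only: free to be pretty (and lossy).
    std::string getHumanReadableURI() const { return reference.render(/*withParams=*/true); }

    // For the disk cache key: the rendered reference without parameters
    // (which, as noted above, is still fragile for some store types).
    std::string cacheKey() const { return reference.render(/*withParams=*/false); }
};
```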
* It is tough to contribute to a project that doesn't use a formatter,
* It is extra hard to contribute to a project which has configured the formatter, but ignores it for some files
* Code formatting makes it harder to hide obscure / weird bugs by accident or on purpose,
Let's rip the bandaid off?
Note that PRs currently in flight should be able to be merged relatively easily by applying `clang-format` to their tip prior to merge.
For example, instead of doing

    #include "nix/store-config.hh"
    #include "nix/derived-path.hh"

now do

    #include "nix/store/config.hh"
    #include "nix/store/derived-path.hh"
This was originally planned in the issue, and also recently requested by
Eelco.
Most of the change is purely mechanical. There is just one small
additional issue. See how, in the example above, we took this
opportunity to also turn `<comp>-config.hh` into `<comp>/config.hh`.
Well, there was already a `nix/util/config.{cc,hh}`. Even though there
is no public configuration header for libutil (which would also be
called `nix/util/config.{cc,hh}`), that's still confusing. To avoid any
such confusion, we renamed that to `nix/util/configuration.{cc,hh}`.
Finally, note that the libflake headers already did this, so we didn't
need to do anything to them. We wouldn't want to mistakenly get
`nix/flake/flake/flake.hh`!
Progress on #7876
The short answer for why we need to do this is so that we can consistently do
`#include "nix/..."`. Without this change, there are ways to still make
that work, but they are hacky, and they have downsides such as making it
harder to ensure that headers from the wrong Nix library (e.g.
`libnixexpr` headers in `libnixutil`) aren't being used.
The C API already used `nix_api_*`, so its headers are accordingly *not* put
in subdirectories.
Progress on #7876
We resisted doing this for a while because it would be annoying to not
have the header/source file pairs close by, and to not have it be easy to
change the file path/name from one to the other. But I am ameliorating
that with symlinks in the next commit.
"content-address*ed*" derivation is misleading because all derivations
are *themselves* content-addressed. What may or may not be
content-addressed is not derivation itself, but the *output* of the
derivation.
The outputs are not *part* of the derivation (for then the derivation
wouldn't be complete before we built it) but rather separate entities
produced by the derivation.
"content-adddress*ed*" is not correctly because it can only describe
what the derivation *is*, and that is not what we are trying to do.
"content-address*ing*" is correct because it describes what the
derivation *does* --- it produces content-addressed data.
This allows RemoteStore::addMultipleToStore() to free the Source
objects early (and in particular the associated sinkToSource()
buffers). This should fix #7359. For example, memory consumption of

    nix copy --derivation --to ssh-ng://localhost?remote-store=/tmp/nix --derivation --no-check-sigs \
      /nix/store/4p9xmfgnvclqpii8pxqcwcvl9bxqy2xf-nixos-system-...drv

went from 353 MB to 74 MB.
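
The general idea, as a sketch (the real signature and types differ):

```cpp
#include <memory>
#include <vector>

struct Source {
    virtual ~Source() = default;
};

// Process each Source, then release it (and its buffers) before moving on,
// instead of keeping all of them alive until the whole batch is done.
void addMultipleToStoreSketch(std::vector<std::unique_ptr<Source>> sources)
{
    for (auto & source : sources) {
        // ... stream *source to the daemon here ...
        source.reset();   // free this path's buffers before starting the next one
    }
}
```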
Since withFramedSink() is now used a lot more than in the past (for
every addToStore() variant), we were creating a lot of threads, e.g.

    nix flake show --no-eval-cache --all-systems github:NixOS/nix/afdd12be5e19c0001ff3297dea544301108d298

would create 46418 threads. While threads on Linux are cheap, this is
still substantial overhead.
So instead, FramedSink now polls before every write to check whether
there are pending messages from the daemon. This could slightly increase
the latency on log messages from the daemon, but not on exceptions (which
were only synchronously checked from FramedSink anyway).
This speeds up the command above from 19.2s to 17.5s on my machine (a
9% speedup).
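
In sketch form (the names and the polling call here are assumptions, not the
real implementation):

```cpp
// Before writing each frame, drain any messages the daemon has already
// sent, instead of dedicating a thread to reading them.
#include <cstddef>

struct FramedSinkSketch {
    bool daemonHasPendingMessages();   // e.g. a non-blocking poll() on the socket
    void processDaemonMessage();       // log the message, or rethrow a daemon error
    void writeFrame(const char * data, std::size_t len);

    void operator()(const char * data, std::size_t len)
    {
        while (daemonHasPendingMessages())
            processDaemonMessage();
        writeFrame(data, len);
    }
};
```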
Currently, the worker protocol has a version number that we increment
whenever we change something in the protocol. However, this can cause
a collision between Nix PRs / forks that make protocol changes
(e.g. PR #9857 increments the version, which could collide with
another PR). So instead, the client and daemon now exchange a set of
protocol features (such as `auth-forwarding`). They will use the
intersection of the sets of features, i.e. the features they both
support.
Note that protocol features are completely distinct from
`ExperimentalFeature`s.
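
A sketch of the negotiation, assuming features are exchanged as plain strings:

```cpp
#include <set>
#include <string>

// Each side sends the features it supports; both then use the intersection.
using ProtocolFeatures = std::set<std::string>;

ProtocolFeatures negotiateFeatures(const ProtocolFeatures & mine, const ProtocolFeatures & theirs)
{
    ProtocolFeatures common;
    for (auto & feature : mine)
        if (theirs.count(feature))
            common.insert(feature);
    return common;   // e.g. both sides support "auth-forwarding"
}
```

Using the intersection means a new feature can be added in one PR or fork
without coordinating a global version bump.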
The old `std::variant` is bad because we aren't adding a new case to
`FileIngestionMethod` so much as we are defining a separate concept ---
store object content addressing rather than file system object content
addressing. As such, it is more correct to just create a fresh
enumeration.
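
Roughly, the distinction being drawn (the enumerator and type names here are
illustrative):

```cpp
// File system object content addressing:
enum struct FileIngestionMethod { Flat, NixArchive };

// Store object content addressing is a separate concept, so it gets its
// own enumeration instead of being bolted onto FileIngestionMethod
// through a std::variant:
enum struct ContentAddressMethod { Text, Flat, NixArchive };
```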
Co-authored-by: Robert Hensing <roberth@users.noreply.github.com>
This increases test coverage, and gets the worker protocol ready to be
used by Hydra.
Why don't we just try to use the store interface in Hydra? Well, the
problem is that the store interface works on connection pools, with each
operation potentially getting a different connection, but the way temp
roots work requires that we keep one logical "transaction" (temp root
session) using the same connection.
The longer-term solution is probably to make connections themselves
implement the store interface, but that is something that builds on
this, so I feel OK that this is not churn in the wrong direction.
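
In sketch form, the constraint looks like this (hypothetical types):

```cpp
// A temp root lasts only as long as the daemon connection that created it,
// so every operation in the "session" must reuse that same connection.
#include <string>

struct Connection {
    void addTempRoot(const std::string & storePath);   // declaration only, for the sketch
};

struct TempRootSession {
    Connection & conn;   // pinned for the lifetime of the session

    void protect(const std::string & storePath) { conn.addTempRoot(storePath); }
};
```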
Fixes #9584
Instead, serialize as a NAR and send that over, then rehash server-side.
This is algorithmically simpler, but comes at the cost of a new
parameter to `Store::addToStoreFromDump`.
Co-authored-by: Eelco Dolstra <edolstra@gmail.com>
Part of RFC 133
Extracted from our old IPFS branches.
Co-Authored-By: Matthew Bauer <mjbauer95@gmail.com>
Co-Authored-By: Carlo Nucera <carlo.nucera@protonmail.com>
Co-authored-by: Robert Hensing <roberth@users.noreply.github.com>
Co-authored-by: Florian Klink <flokli@flokli.de>
No outward-facing behavior is changed.
Older methods with the same names that operate on a method + algo pair (for
the old-style `<method>:algo`) are renamed to `*WithAlgo`.
The functions are unit-tested in the same way the names for the hash
algorithms are tested.
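
For illustration, a sketch of splitting the old-style pair (the real functions
and their names differ):

```cpp
#include <optional>
#include <string>
#include <string_view>
#include <utility>

// Split "<method>:algo" at the colon into its two halves.
std::optional<std::pair<std::string, std::string>> splitMethodWithAlgo(std::string_view s)
{
    auto colon = s.find(':');
    if (colon == std::string_view::npos)
        return std::nullopt;
    return std::pair{std::string(s.substr(0, colon)), std::string(s.substr(colon + 1))};
}
```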
This makes for a more useful manual table of contents that displays the
information at a glance.
The `nix help-stores` command is kept as-is, even though it will show up
in the manual with the same information as these pages due to the way it
is written as a "`--help`-style" command. Deciding what to do with that
command is left for a later PR.
This change also lists all store types at the top of the respective overview page.
Co-authored-by: John Ericson <John.Ericson@Obsidian.Systems>
It does not belong with the data type itself.
This also materializes the fact that `copyPath` does not do any version
negotiation but just hard-codes "16".
The non-standard interface of these serializers makes them harder to test,
but this is fixed in the next commit, which adds those tests.
Worker Protocol:
Note that the worker protocol already had a serialization for
`BuildResult`; this was added in
a4604f1928. It didn't have any versioning
support because, at that time, reusable serializers were not aware of the protocol
version. It could thus only be used for new messages also introduced in
that commit.
Now that we do support versioning in reusable serializers, we can expand
it to support all known versions and use it in many more places.
The existing test data becomes the version 1.29 tests: note that those
files' contents are unchanged. 1.28 and 1.27 tests are added to cover
the older code-paths.
The keyed build result test only has 1.29 because the keying was also
added in a4604f1928; the older
serializations are always used unkeyed.
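
To illustrate what a version-aware reusable serializer looks like, a sketch
(the field names and version cut-off are made up, not the actual worker
protocol encoding):

```cpp
#include <cstdint>
#include <ostream>
#include <string>

struct BuildResultSketch {
    std::uint8_t status = 0;
    std::string errorMsg;
    std::uint64_t timesBuilt = 0;   // only understood by newer protocol versions
};

void write(std::ostream & out, unsigned clientVersion, const BuildResultSketch & res)
{
    out << unsigned(res.status) << '\n' << res.errorMsg << '\n';
    if (clientVersion >= 29)        // older clients never see the extra field
        out << res.timesBuilt << '\n';
}
```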
Serve Protocol:
Conversely, no attempt was made to factor out such a serializer for the
serve protocol, so our work on that protocol in this commit proceeds
from scratch.
There were some ad-hoc functions to account for versions, while the already
factored-out serializer just supported the latest version.
Now, we can fold that version-specific logic into the factored-out one,
and so we do.