diff --git a/.gitignore b/.gitignore index 9c4691240..4782bfbaf 100644 --- a/.gitignore +++ b/.gitignore @@ -47,3 +47,6 @@ result-* .DS_Store flake-regressions + +# direnv +.direnv/ diff --git a/.mergify.yml b/.mergify.yml index 36ffe6e8b..8941711a9 100644 --- a/.mergify.yml +++ b/.mergify.yml @@ -139,3 +139,14 @@ pull_request_rules: labels: - automatic backport - merge-queue + + - name: backport patches to 2.29 + conditions: + - label=backport 2.29-maintenance + actions: + backport: + branches: + - "2.29-maintenance" + labels: + - automatic backport + - merge-queue diff --git a/.version b/.version index 69886179f..6a6900382 100644 --- a/.version +++ b/.version @@ -1 +1 @@ -2.29.1 +2.30.0 diff --git a/doc/manual/source/SUMMARY.md.in b/doc/manual/source/SUMMARY.md.in index 6c5aa16d5..00f231a6a 100644 --- a/doc/manual/source/SUMMARY.md.in +++ b/doc/manual/source/SUMMARY.md.in @@ -52,6 +52,7 @@ - [Tuning Cores and Jobs](advanced-topics/cores-vs-jobs.md) - [Verifying Build Reproducibility](advanced-topics/diff-hook.md) - [Using the `post-build-hook`](advanced-topics/post-build-hook.md) + - [Evaluation profiler](advanced-topics/eval-profiler.md) - [Command Reference](command-ref/index.md) - [Common Options](command-ref/opt-common.md) - [Common Environment Variables](command-ref/env-common.md) @@ -147,6 +148,7 @@ - [Release 3.0.0 (2025-03-04)](release-notes-determinate/rl-3.0.0.md) - [Nix Release Notes](release-notes/index.md) {{#include ./SUMMARY-rl-next.md}} + - [Release 2.30 (2025-07-07)](release-notes/rl-2.30.md) - [Release 2.29 (2025-05-14)](release-notes/rl-2.29.md) - [Release 2.28 (2025-04-02)](release-notes/rl-2.28.md) - [Release 2.27 (2025-03-03)](release-notes/rl-2.27.md) diff --git a/doc/manual/source/advanced-topics/eval-profiler.md b/doc/manual/source/advanced-topics/eval-profiler.md new file mode 100644 index 000000000..ed3848bb2 --- /dev/null +++ b/doc/manual/source/advanced-topics/eval-profiler.md @@ -0,0 +1,33 @@ +# Using the `eval-profiler` + +The Nix evaluator supports [evaluation](@docroot@/language/evaluation.md) +profiling +compatible with `flamegraph.pl`. The profiler samples the Nix +function call stack at regular intervals. It can be enabled with the +[`eval-profiler`](@docroot@/command-ref/conf-file.md#conf-eval-profiler) +setting: + +```console +$ nix-instantiate "<nixpkgs>" -A hello --eval-profiler flamegraph +``` + +The output file path and stack sampling frequency can be configured with +[`eval-profile-file`](@docroot@/command-ref/conf-file.md#conf-eval-profile-file) +and [`eval-profiler-frequency`](@docroot@/command-ref/conf-file.md#conf-eval-profiler-frequency). +By default the collected profile is saved to the `nix.profile` file in the current working directory. + +The collected profile can be directly consumed by `flamegraph.pl`: + +```console +$ flamegraph.pl nix.profile > flamegraph.svg +``` + +The line information in the profile contains the [call +site](https://en.wikipedia.org/wiki/Call_site) position and the name of the +function being called (when available). For example: + +``` +/nix/store/x9wnkly3k1gkq580m90jjn32q9f05q2v-source/pkgs/top-level/default.nix:167:5:primop import +``` + +Here the `import` primop is called at `/nix/store/x9wnkly3k1gkq580m90jjn32q9f05q2v-source/pkgs/top-level/default.nix:167:5`.
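For instance, both the sampling frequency and the output location can be overridden in a single invocation; a sketch follows, where the frequency value and file names are arbitrary illustrations rather than recommended settings:

```console
$ nix-instantiate "<nixpkgs>" -A hello --eval-profiler flamegraph \
    --eval-profiler-frequency 500 --eval-profile-file ./hello.profile
$ flamegraph.pl ./hello.profile > hello.svg
```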
diff --git a/doc/manual/source/command-ref/nix-channel.md b/doc/manual/source/command-ref/nix-channel.md index bc0a90b11..a65ec97c5 100644 --- a/doc/manual/source/command-ref/nix-channel.md +++ b/doc/manual/source/command-ref/nix-channel.md @@ -59,6 +59,11 @@ This command has the following operations: Download the Nix expressions of subscribed channels and create a new generation. Update all channels if none is specified, and only those included in *names* otherwise. + > **Note** + > + > Downloaded channel contents are cached. + > Use `--tarball-ttl` or the [`tarball-ttl` configuration option](@docroot@/command-ref/conf-file.md#conf-tarball-ttl) to change the validity period of cached downloads. + - `--list-generations` Prints a list of all the current existing generations for the diff --git a/doc/manual/source/glossary.md b/doc/manual/source/glossary.md index e18324ad9..9e76ad37b 100644 --- a/doc/manual/source/glossary.md +++ b/doc/manual/source/glossary.md @@ -31,9 +31,22 @@ The industry term for storage and retrieval systems using [content addressing](#gloss-content-address). A Nix store also has [input addressing](#gloss-input-addressed-store-object), and metadata. +- [derivation]{#gloss-derivation} + + A derivation can be thought of as a [pure function](https://en.wikipedia.org/wiki/Pure_function) that produces new [store objects][store object] from existing store objects. + + Derivations are implemented as [operating system processes that run in a sandbox](@docroot@/store/building.md#builder-execution). + This sandbox by default only allows reading from store objects specified as inputs, and only allows writing to designated [outputs][output] to be [captured as store objects](@docroot@/store/building.md#processing-outputs). + + A derivation is typically specified as a [derivation expression] in the [Nix language], and [instantiated][instantiate] to a [store derivation]. + There are multiple ways of obtaining store objects from store derivations, collectively called [realisation][realise]. + + [derivation]: #gloss-derivation + - [store derivation]{#gloss-store-derivation} - A single build task. + A [derivation] represented as a [store object]. + See [Store Derivation](@docroot@/store/derivation/index.md#store-derivation) for details. [store derivation]: #gloss-store-derivation @@ -57,10 +70,7 @@ - [derivation expression]{#gloss-derivation-expression} - A description of a [store derivation] in the Nix language. - The output(s) of a derivation are store objects. - Derivations are typically specified in Nix expressions using the [`derivation` primitive](./language/derivations.md). - These are translated into store layer *derivations* (implicitly by `nix-env` and `nix-build`, or explicitly by `nix-instantiate`). + A description of a [store derivation] using the [`derivation` primitive](./language/derivations.md) in the [Nix language]. [derivation expression]: #gloss-derivation-expression diff --git a/doc/manual/source/language/advanced-attributes.md b/doc/manual/source/language/advanced-attributes.md index a939847e1..34c3b636b 100644 --- a/doc/manual/source/language/advanced-attributes.md +++ b/doc/manual/source/language/advanced-attributes.md @@ -53,23 +53,13 @@ Derivations can declare some infrequently used optional attributes. - [`__structuredAttrs`]{#adv-attr-structuredAttrs}\ If the special attribute `__structuredAttrs` is set to `true`, the other derivation - attributes are serialised into a file in JSON format.
The environment variable - `NIX_ATTRS_JSON_FILE` points to the exact location of that file both in a build - and a [`nix-shell`](../command-ref/nix-shell.md). This obviates the need for - [`passAsFile`](#adv-attr-passAsFile) since JSON files have no size restrictions, - unlike process environments. + attributes are serialised into a file in JSON format. - It also makes it possible to tweak derivation settings in a structured way; see - [`outputChecks`](#adv-attr-outputChecks) for example. + This obviates the need for [`passAsFile`](#adv-attr-passAsFile) since JSON files have no size restrictions, unlike process environments. + It also makes it possible to tweak derivation settings in a structured way; + see [`outputChecks`](#adv-attr-outputChecks) for example. - As a convenience to Bash builders, - Nix writes a script that initialises shell variables - corresponding to all attributes that are representable in Bash. The - environment variable `NIX_ATTRS_SH_FILE` points to the exact - location of the script, both in a build and a - [`nix-shell`](../command-ref/nix-shell.md). This includes non-nested - (associative) arrays. For example, the attribute `hardening.format = true` - ends up as the Bash associative array element `${hardening[format]}`. + See the [corresponding section in the derivation page](@docroot@/store/derivation/index.md#structured-attrs) for further details. > **Warning** > diff --git a/doc/manual/source/language/index.md b/doc/manual/source/language/index.md index 5bb939e18..1eb14e96d 100644 --- a/doc/manual/source/language/index.md +++ b/doc/manual/source/language/index.md @@ -1,6 +1,6 @@ # Nix Language -The Nix language is designed for conveniently creating and composing *derivations* – precise descriptions of how contents of existing files are used to derive new files. +The Nix language is designed for conveniently creating and composing [derivations](@docroot@/glossary.md#gloss-derivation) – precise descriptions of how contents of existing files are used to derive new files. > **Tip** > diff --git a/doc/manual/source/language/operators.md b/doc/manual/source/language/operators.md index dbf2441cb..ab74e8a99 100644 --- a/doc/manual/source/language/operators.md +++ b/doc/manual/source/language/operators.md @@ -196,7 +196,7 @@ All comparison operators are implemented in terms of `<`, and the following equi ## Logical implication -Equivalent to `!`*b1* `||` *b2*. +Equivalent to `!`*b1* `||` *b2* (or `if` *b1* `then` *b2* `else true`) [Logical implication]: #logical-implication diff --git a/doc/manual/source/language/syntax.md b/doc/manual/source/language/syntax.md index 08a64f684..85162db74 100644 --- a/doc/manual/source/language/syntax.md +++ b/doc/manual/source/language/syntax.md @@ -225,8 +225,8 @@ passed in first , e.g., ```nix let add = { __functor = self: x: x + self.x; }; - inc = add // { x = 1; }; -in inc 1 + inc = add // { x = 1; }; # inc is { x = 1; __functor = (...) } +in inc 1 # equivalent of `add.__functor add 1` i.e. `1 + self.x` ``` evaluates to `2`. This can be used to attach metadata to a function diff --git a/doc/manual/source/protocols/json/derivation.md b/doc/manual/source/protocols/json/derivation.md index 92956091a..2fc018c33 100644 --- a/doc/manual/source/protocols/json/derivation.md +++ b/doc/manual/source/protocols/json/derivation.md @@ -85,3 +85,7 @@ is a JSON object with the following fields: * `env`: The environment passed to the `builder`. 
+ +* `structuredAttrs`: + [Structured Attributes](@docroot@/store/derivation/index.md#structured-attrs), only defined if the derivation contains them. + Structured attributes are JSON, and thus embedded as-is. diff --git a/doc/manual/source/release-notes/rl-2.24.md b/doc/manual/source/release-notes/rl-2.24.md index 0d6823a68..33fc0db03 100644 --- a/doc/manual/source/release-notes/rl-2.24.md +++ b/doc/manual/source/release-notes/rl-2.24.md @@ -284,7 +284,7 @@ `<nix/fetchurl.nix>` is also known as the builtin derivation builder `builtin:fetchurl`. It's not to be confused with the evaluation-time function `builtins.fetchurl`, which was not affected by this issue. -# Contributors +## Contributors This release was made possible by the following 43 contributors: diff --git a/doc/manual/source/release-notes/rl-2.25.md b/doc/manual/source/release-notes/rl-2.25.md index 29e3e509c..cfde8b1ef 100644 --- a/doc/manual/source/release-notes/rl-2.25.md +++ b/doc/manual/source/release-notes/rl-2.25.md @@ -77,7 +77,7 @@ `<nix/fetchurl.nix>` is also known as the builtin derivation builder `builtin:fetchurl`. It's not to be confused with the evaluation-time function `builtins.fetchurl`, which was not affected by this issue. -# Contributors +## Contributors This release was made possible by the following 58 contributors: diff --git a/doc/manual/source/release-notes/rl-2.26.md b/doc/manual/source/release-notes/rl-2.26.md index d2a890eb6..0c3df828f 100644 --- a/doc/manual/source/release-notes/rl-2.26.md +++ b/doc/manual/source/release-notes/rl-2.26.md @@ -76,7 +76,7 @@ - Evaluation caching now works for dirty Git workdirs [#11992](https://github.com/NixOS/nix/pull/11992) -# Contributors +## Contributors This release was made possible by the following 45 contributors: diff --git a/doc/manual/source/release-notes/rl-2.27.md b/doc/manual/source/release-notes/rl-2.27.md index 3643f7476..34da62525 100644 --- a/doc/manual/source/release-notes/rl-2.27.md +++ b/doc/manual/source/release-notes/rl-2.27.md @@ -47,7 +47,7 @@ blake3-34P4p+iZXcbbyB1i4uoF7eWCGcZHjmaRn6Y7QdynLwU= ``` -# Contributors +## Contributors This release was made possible by the following 21 contributors: diff --git a/doc/manual/source/release-notes/rl-2.28.md b/doc/manual/source/release-notes/rl-2.28.md index 6da09546e..93ea2cfde 100644 --- a/doc/manual/source/release-notes/rl-2.28.md +++ b/doc/manual/source/release-notes/rl-2.28.md @@ -82,7 +82,7 @@ This completes the infrastructure overhaul for the [RFC 132](https://github.com/ Although this change is not as critical, we figured it would be good to do this API change at the same time, also. Also note that we try to keep the C API compatible, but we decided to break this function because it was young and likely not in widespread use yet. This frees up time to make important progress on the rest of the C API. -# Contributors +## Contributors This earlier-than-usual release was made possible by the following 16 contributors: diff --git a/doc/manual/source/release-notes/rl-2.29.md b/doc/manual/source/release-notes/rl-2.29.md index ad63fff2f..b59d6d6f0 100644 --- a/doc/manual/source/release-notes/rl-2.29.md +++ b/doc/manual/source/release-notes/rl-2.29.md @@ -111,7 +111,7 @@ This fact is counterbalanced by the fact that most of those changes are bug fixe This in particular prevents parts of GCC 14's diagnostics from being improperly filtered away.
-# Contributors +## Contributors This release was made possible by the following 40 contributors: diff --git a/doc/manual/source/release-notes/rl-2.30.md b/doc/manual/source/release-notes/rl-2.30.md new file mode 100644 index 000000000..34d3e5bab --- /dev/null +++ b/doc/manual/source/release-notes/rl-2.30.md @@ -0,0 +1,153 @@ +# Release 2.30.0 (2025-07-07) + +## Backward-incompatible changes and deprecations + +- [`build-dir`] no longer defaults to `$TMPDIR` + + The directory in which temporary build directories are created no longer defaults + to `TMPDIR` or `/tmp`, to avoid builders making their directories + world-accessible. This behavior allowed escaping the build sandbox and could + cause build impurities even when not used maliciously. We now default to `builds` + in `NIX_STATE_DIR` (which is `/nix/var/nix/builds` in the default configuration). + +- Deprecate manually making structured attrs using the `__json` attribute [#13220](https://github.com/NixOS/nix/pull/13220) + + The proper way to create a derivation using [structured attrs] in the Nix language is by using `__structuredAttrs = true` with [`builtins.derivation`]. + However, by exploiting how structured attrs are implemented, it has also been possible to create them by setting the `__json` environment variable to a serialized JSON string. + This sneaky alternative method is now deprecated, and may be disallowed in future versions of Nix. + + [structured attrs]: @docroot@/language/advanced-attributes.md#adv-attr-structuredAttrs + [`builtins.derivation`]: @docroot@/language/builtins.html#builtins-derivation + +- Rename `nix profile install` to [`nix profile add`] [#13224](https://github.com/NixOS/nix/pull/13224) + + The command `nix profile install` has been renamed to [`nix profile add`] (though the former is still available as an alias). This is because the verb "add" is a better antonym for the verb "remove" (i.e. `nix profile remove`). Nix also does not have install hooks or general behavior often associated with "installing". + +## Performance improvements + +This release has a number of performance improvements, in particular: + +- Reduce the size of value from 24 to 16 bytes [#13407](https://github.com/NixOS/nix/pull/13407) + + This shaves off a very significant amount of memory used for evaluation (~20% reduction in maximum heap size and ~17% in total bytes). + +## Features + +- Add [stack sampling evaluation profiler] [#13220](https://github.com/NixOS/nix/pull/13220) + + The Nix evaluator now supports [stack sampling evaluation profiling](@docroot@/advanced-topics/eval-profiler.md) via the [`--eval-profiler flamegraph`] setting. + It outputs collapsed call stack information to the file specified by + [`--eval-profile-file`] (`nix.profile` by default) in a format directly consumable + by `flamegraph.pl` and compatible tools like [speedscope](https://speedscope.app/). + Sampling frequency can be configured via [`--eval-profiler-frequency`] (99 Hz by default). + + Unlike the existing [`--trace-function-calls`], this profiler includes the name of the function + being called when it's available. + +- [`nix repl`] prints which variables were loaded [#11406](https://github.com/NixOS/nix/pull/11406) + + Instead of `Added variables` it now prints the first 10 variables that were added to the global scope. + +- `nix flake archive`: Add [`--no-check-sigs`] option [#13277](https://github.com/NixOS/nix/pull/13277) + + This is useful when using [`nix flake archive`] with the destination set to a remote store.
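  For example, a sketch of copying a flake's source trees to a remote store without checking signatures (the store URL here is purely illustrative):

  ```console
  $ nix flake archive --to ssh-ng://cache.example.org --no-check-sigs
  ```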
+ +- Emit warnings for IFDs with [`trace-import-from-derivation`] option [#13279](https://github.com/NixOS/nix/pull/13279) + + While we have the setting [`allow-import-from-derivation`] to deny import-from-derivation (IFD), sometimes users would like to observe IFDs during CI processes to gradually phase out the idiom. The new setting `trace-import-from-derivation`, when set, logs a simple warning to the console. + +- `json-log-path` setting [#13003](https://github.com/NixOS/nix/pull/13003) + + New setting [`json-log-path`] that sends a copy of all Nix log messages (in JSON format) to a file or Unix domain socket. + +- Non-flake inputs now contain a `sourceInfo` attribute [#13164](https://github.com/NixOS/nix/issues/13164) [#13170](https://github.com/NixOS/nix/pull/13170) + + Flakes have always had a `sourceInfo` attribute which describes the source of the flake. + The `sourceInfo.outPath` is often identical to the flake's `outPath`. However, it can differ when the flake is located in a subdirectory of its source. + + Non-flake inputs (i.e. inputs with [`flake = false`]) can also be located at some path _within_ a wider source. + This usually happens when defining a relative path input within the same source as the parent flake, e.g. `inputs.foo.url = ./some-file.nix`. + Such relative inputs will now inherit their parent's `sourceInfo`. + + This also means it is now possible to use `?dir=subdir` on non-flake inputs. + + This iterates on the work done in 2.26 to improve relative path support ([#10089](https://github.com/NixOS/nix/pull/10089)), + and resolves a regression introduced in 2.28 relating to nested relative path inputs ([#13164](https://github.com/NixOS/nix/issues/13164)). + +## Miscellaneous changes + +- [`builtins.sort`] uses PeekSort [#12623](https://github.com/NixOS/nix/pull/12623) + + Previously it used libstdc++'s `std::stable_sort()`. However, that implementation is not reliable if the user-supplied comparison function is not a strict weak ordering. + +- Revert incomplete closure mixed download and build feature [#77](https://github.com/NixOS/nix/issues/77) [#12628](https://github.com/NixOS/nix/issues/12628) [#13176](https://github.com/NixOS/nix/pull/13176) + + Since Nix 1.3 ([commit `299141e`] in 2013) Nix has attempted to mix together upstream fresh builds and downstream substitutions when remote substituters contain an "incomplete closure" (have some store objects, but not the store objects they reference). + This feature is now removed. + + In the worst case, removing this feature could cause more building downstream, but it should not cause outright failures, since this is not happening for opaque store objects that we don't know how to build if we decide not to substitute. + In practice, however, we doubt this will lead to much additional building. + Remote stores that are missing dependencies in arbitrary ways (e.g. corruption) don't seem to be very common. + + On the contrary, when remote stores fail to implement the [closure property](@docroot@/store/store-object.md#closure-property), it is usually an *intentional* choice on the part of the remote store, because it wishes to serve as an "overlay" store over another store, such as `https://cache.nixos.org`. + If an "incomplete closure" is encountered in that situation, the right fix is not to do some sort of "franken-building" as this feature did, but instead to make sure both substituters are enabled in the settings.
+ + (In the future, we should make it easier for remote stores to indicate this to clients, to catch settings that won't work in general before a missing dependency is actually encountered.) + +## Contributors + +This release was made possible by the following 32 contributors: + +- Cole Helbling [**(@cole-h)**](https://github.com/cole-h) +- Eelco Dolstra [**(@edolstra)**](https://github.com/edolstra) +- Egor Konovalov [**(@egorkonovalov)**](https://github.com/egorkonovalov) +- Farid Zakaria [**(@fzakaria)**](https://github.com/fzakaria) +- Graham Christensen [**(@grahamc)**](https://github.com/grahamc) +- gustavderdrache [**(@gustavderdrache)**](https://github.com/gustavderdrache) +- Gwenn Le Bihan [**(@gwennlbh)**](https://github.com/gwennlbh) +- h0nIg [**(@h0nIg)**](https://github.com/h0nIg) +- Jade Masker [**(@donottellmetonottellyou)**](https://github.com/donottellmetonottellyou) +- jayeshv [**(@jayeshv)**](https://github.com/jayeshv) +- Jeremy Fleischman [**(@jfly)**](https://github.com/jfly) +- John Ericson [**(@Ericson2314)**](https://github.com/Ericson2314) +- Jonas Chevalier [**(@zimbatm)**](https://github.com/zimbatm) +- Jörg Thalheim [**(@Mic92)**](https://github.com/Mic92) +- kstrafe [**(@kstrafe)**](https://github.com/kstrafe) +- Luc Perkins [**(@lucperkins)**](https://github.com/lucperkins) +- Matt Sturgeon [**(@MattSturgeon)**](https://github.com/MattSturgeon) +- Nikita Krasnov [**(@synalice)**](https://github.com/synalice) +- Peder Bergebakken Sundt [**(@pbsds)**](https://github.com/pbsds) +- pennae [**(@pennae)**](https://github.com/pennae) +- Philipp Otterbein +- Pol Dellaiera [**(@drupol)**](https://github.com/drupol) +- PopeRigby [**(@poperigby)**](https://github.com/poperigby) +- Raito Bezarius +- Robert Hensing [**(@roberth)**](https://github.com/roberth) +- Samuli Thomasson [**(@SimSaladin)**](https://github.com/SimSaladin) +- Sergei Zimmerman [**(@xokdvium)**](https://github.com/xokdvium) +- Seth Flynn [**(@getchoo)**](https://github.com/getchoo) +- Stefan Boca [**(@stefanboca)**](https://github.com/stefanboca) +- tomberek [**(@tomberek)**](https://github.com/tomberek) +- Tristan Ross [**(@RossComputerGuy)**](https://github.com/RossComputerGuy) +- Valentin Gagarin [**(@fricklerhandwerk)**](https://github.com/fricklerhandwerk) +- Vladimír Čunát [**(@vcunat)**](https://github.com/vcunat) +- Wolfgang Walther [**(@wolfgangwalther)**](https://github.com/wolfgangwalther) + + +[stack sampling evaluation profiler]: @docroot@/advanced-topics/eval-profiler.md +[`--eval-profiler`]: @docroot@/command-ref/conf-file.md#conf-eval-profiler +[`--eval-profiler flamegraph`]: @docroot@/command-ref/conf-file.md#conf-eval-profiler +[`--trace-function-calls`]: @docroot@/command-ref/conf-file.md#conf-trace-function-calls +[`--eval-profile-file`]: @docroot@/command-ref/conf-file.md#conf-eval-profile-file +[`--eval-profiler-frequency`]: @docroot@/command-ref/conf-file.md#conf-eval-profiler-frequency +[`build-dir`]: @docroot@/command-ref/conf-file.md#conf-build-dir +[`nix profile add`]: @docroot@/command-ref/new-cli/nix3-profile-add.md +[`nix repl`]: @docroot@/command-ref/new-cli/nix3-repl.md +[`nix flake archive`]: @docroot@/command-ref/new-cli/nix3-flake-archive.md +[`json-log-path`]: @docroot@/command-ref/conf-file.md#conf-json-log-path +[`trace-import-from-derivation`]: @docroot@/command-ref/conf-file.md#conf-trace-import-from-derivation +[`allow-import-from-derivation`]: @docroot@/command-ref/conf-file.md#conf-allow-import-from-derivation +[`builtins.sort`]: 
@docroot@/language/builtins.md#builtins-sort +[`flake = false`]: @docroot@/command-ref/new-cli/nix3-flake.md?highlight=false#flake-inputs +[`--no-check-sigs`]: @docroot@/command-ref/new-cli/nix3-flake-archive.md#opt-no-check-sigs +[commit `299141e`]: https://github.com/NixOS/nix/commit/299141ecbd08bae17013226dbeae71e842b4fdd7 diff --git a/doc/manual/source/store/derivation/index.md b/doc/manual/source/store/derivation/index.md index 911c28485..1687ad8c0 100644 --- a/doc/manual/source/store/derivation/index.md +++ b/doc/manual/source/store/derivation/index.md @@ -138,6 +138,17 @@ See [Wikipedia](https://en.wikipedia.org/wiki/Argv) for details. Environment variables which will be passed to the [builder](#builder) executable. +#### Structured Attributes {#structured-attrs} + +Nix also has special support for embedding JSON in the derivations. + +The environment variable `NIX_ATTRS_JSON_FILE` points to the exact location of that file both in a build and a [`nix-shell`](@docroot@/command-ref/nix-shell.md). + +As a convenience to Bash builders, Nix writes a script that initialises shell variables corresponding to all attributes that are representable in Bash. +The environment variable `NIX_ATTRS_SH_FILE` points to the exact location of the script, both in a build and a [`nix-shell`](@docroot@/command-ref/nix-shell.md). +This includes non-nested (associative) arrays. +For example, the attribute `hardening.format = true` ends up as the Bash associative array element `${hardening[format]}`. + ### Placeholders Placeholders are opaque values used within the [process creation fields] to [store objects] for which we don't yet know [store path]s. @@ -162,7 +173,7 @@ There are two types of placeholder, corresponding to the two cases where this pr > **Explanation** > -> In general, we need to realise [realise] a [store object] in order to be sure to have a store object for it. +> In general, we need to [realise] a [store object] in order to be sure to have a store object for it. > But for these two cases this is either impossible or impractical: > > - In the output case this is impossible: @@ -189,7 +200,7 @@ This ensures that there is a canonical [store path] used to refer to the derivat > **Note** > > Currently, the canonical encoding for every derivation is the "ATerm" format, -> but this is subject to change for types derivations which are not yet stable. +> but this is subject to change for the types of derivations which are not yet stable. Regardless of the format used, when serializing a derivation to a store object, that store object will be content-addressed. @@ -282,7 +293,7 @@ type DerivingPath = ConstantPath | OutputPath; Under this extended model, `DerivingPath`s are thus inductively built up from a root `ConstantPath`, wrapped with zero or more outer `OutputPath`s. -### Encoding {#deriving-path-encoding} +### Encoding {#deriving-path-encoding-higher-order} The encoding is adjusted in the natural way, encoding the `drv` field recursively using the same deriving path encoding. The result of this is that it is possible to have a chain of `^` at the end of the final string, as opposed to just a single one. diff --git a/doc/manual/source/store/store-object.md b/doc/manual/source/store/store-object.md index 115e107fb..10c2384fa 100644 --- a/doc/manual/source/store/store-object.md +++ b/doc/manual/source/store/store-object.md @@ -18,14 +18,14 @@ In particular, the edge corresponding to a reference is from the store object th References other than a self-reference must not form a cycle. 
The graph of references excluding self-references thus forms a [directed acyclic graph]. -[directed acyclic graph]: @docroot@/glossary.md#gloss-directed acyclic graph +[directed acyclic graph]: @docroot@/glossary.md#gloss-directed-acyclic-graph We can take the [transitive closure] of the references graph, which any pair of store objects have an edge not if there is a single reference from the first to the second, but a path of one or more references from the first to the second. The *requisites* of a store object are all store objects reachable by paths of references which start with given store object's references. [transitive closure]: https://en.wikipedia.org/wiki/Transitive_closure -We can also take the [transpose graph] ofthe references graph, where we reverse the orientation of all edges. +We can also take the [transpose graph] of the references graph, where we reverse the orientation of all edges. The *referrers* of a store object are the store objects that reference it. [transpose graph]: https://en.wikipedia.org/wiki/Transpose_graph diff --git a/docker.nix b/docker.nix index 6679fc8d9..ac047b4d6 100644 --- a/docker.nix +++ b/docker.nix @@ -1,6 +1,11 @@ { - pkgs ? import { }, - lib ? pkgs.lib, + # Core dependencies + pkgs, + lib, + dockerTools, + runCommand, + buildPackages, + # Image configuration name ? "nix", tag ? "latest", bundleNixpkgs ? true, @@ -14,36 +19,60 @@ gid ? 0, uname ? "root", gname ? "root", + Labels ? { + "org.opencontainers.image.title" = "Nix"; + "org.opencontainers.image.source" = "https://github.com/NixOS/nix"; + "org.opencontainers.image.vendor" = "Nix project"; + "org.opencontainers.image.version" = nix.version; + "org.opencontainers.image.description" = "Nix container image"; + }, + Cmd ? [ (lib.getExe bashInteractive) ], + # Default Packages + nix, + bashInteractive, + coreutils-full, + gnutar, + gzip, + gnugrep, + which, + curl, + less, + wget, + man, + cacert, + findutils, + iana-etc, + gitMinimal, + openssh, + # Other dependencies + shadow, }: let - defaultPkgs = - with pkgs; - [ - nix - bashInteractive - coreutils-full - gnutar - gzip - gnugrep - which - curl - less - wget - man - cacert.out - findutils - iana-etc - git - openssh - ] - ++ extraPkgs; + defaultPkgs = [ + nix + bashInteractive + coreutils-full + gnutar + gzip + gnugrep + which + curl + less + wget + man + cacert.out + findutils + iana-etc + gitMinimal + openssh + ] ++ extraPkgs; users = { root = { uid = 0; - shell = "${pkgs.bashInteractive}/bin/bash"; + shell = lib.getExe bashInteractive; home = "/root"; gid = 0; groups = [ "root" ]; @@ -52,7 +81,7 @@ let nobody = { uid = 65534; - shell = "${pkgs.shadow}/bin/nologin"; + shell = lib.getExe' shadow "nologin"; home = "/var/empty"; gid = 65534; groups = [ "nobody" ]; @@ -63,7 +92,7 @@ let // lib.optionalAttrs (uid != 0) { "${uname}" = { uid = uid; - shell = "${pkgs.bashInteractive}/bin/bash"; + shell = lib.getExe bashInteractive; home = "/home/${uname}"; gid = gid; groups = [ "${gname}" ]; @@ -147,41 +176,39 @@ let "${k}:x:${toString gid}:${lib.concatStringsSep "," members}"; groupContents = (lib.concatStringsSep "\n" (lib.attrValues (lib.mapAttrs groupToGroup groups))); - defaultNixConf = { - sandbox = "false"; + toConf = + with pkgs.lib.generators; + toKeyValue { + mkKeyValue = mkKeyValueDefault { + mkValueString = v: if lib.isList v then lib.concatStringsSep " " v else mkValueStringDefault { } v; + } " = "; + }; + + nixConfContents = toConf { + sandbox = false; build-users-group = "nixbld"; trusted-public-keys = [ 
"cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY=" ]; }; - nixConfContents = - (lib.concatStringsSep "\n" ( - lib.mapAttrsFlatten ( - n: v: - let - vStr = if builtins.isList v then lib.concatStringsSep " " v else v; - in - "${n} = ${vStr}" - ) (defaultNixConf // nixConf) - )) - + "\n"; - userHome = if uid == 0 then "/root" else "/home/${uname}"; baseSystem = let nixpkgs = pkgs.path; - channel = pkgs.runCommand "channel-nixos" { inherit bundleNixpkgs; } '' + channel = runCommand "channel-nixos" { inherit bundleNixpkgs; } '' mkdir $out if [ "$bundleNixpkgs" ]; then - ln -s ${nixpkgs} $out/nixpkgs + ln -s ${ + builtins.path { + path = nixpkgs; + name = "source"; + } + } $out/nixpkgs echo "[]" > $out/manifest.nix fi ''; - rootEnv = pkgs.buildPackages.buildEnv { - name = "root-profile-env"; - paths = defaultPkgs; - }; - manifest = pkgs.buildPackages.runCommand "manifest.nix" { } '' + # doc/manual/source/command-ref/files/manifest.nix.md + manifest = buildPackages.runCommand "manifest.nix" { } '' cat > $out < $out${userHome}/.nix-channels + # may get replaced by pkgs.dockerTools.binSh & pkgs.dockerTools.usrBinEnv mkdir -p $out/bin $out/usr/bin - ln -s ${pkgs.coreutils}/bin/env $out/usr/bin/env - ln -s ${pkgs.bashInteractive}/bin/bash $out/bin/sh + ln -s ${lib.getExe' coreutils-full "env"} $out/usr/bin/env + ln -s ${lib.getExe bashInteractive} $out/bin/sh '' + (lib.optionalString (flake-registry-path != null) '' @@ -295,13 +330,13 @@ let globalFlakeRegistryPath="$nixCacheDir/flake-registry.json" ln -s ${flake-registry-path} $out$globalFlakeRegistryPath mkdir -p $out/nix/var/nix/gcroots/auto - rootName=$(${pkgs.nix}/bin/nix hash file --type sha1 --base32 <(echo -n $globalFlakeRegistryPath)) + rootName=$(${lib.getExe' nix "nix"} hash file --type sha1 --base32 <(echo -n $globalFlakeRegistryPath)) ln -s $globalFlakeRegistryPath $out/nix/var/nix/gcroots/auto/$rootName '') ); in -pkgs.dockerTools.buildLayeredImageWithNixDb { +dockerTools.buildLayeredImageWithNixDb { inherit name @@ -327,7 +362,7 @@ pkgs.dockerTools.buildLayeredImageWithNixDb { ''; config = { - Cmd = [ "${userHome}/.nix-profile/bin/bash" ]; + inherit Cmd Labels; User = "${toString uid}:${toString gid}"; Env = [ "USER=${uname}" diff --git a/flake.nix b/flake.nix index 0207134cd..6d9906b9f 100644 --- a/flake.nix +++ b/flake.nix @@ -172,19 +172,6 @@ }; nix = final.nixComponents2.nix-cli; - - # See https://github.com/NixOS/nixpkgs/pull/214409 - # Remove when fixed in this flake's nixpkgs - pre-commit = - if prev.stdenv.hostPlatform.system == "i686-linux" then - (prev.pre-commit.override (o: { - dotnet-sdk = ""; - })).overridePythonAttrs - (o: { - doCheck = false; - }) - else - prev.pre-commit; }; in @@ -230,28 +217,24 @@ This shouldn't build anything significant; just check that things (including derivations) are _set up_ correctly. */ - # Disabled due to a bug in `testEqualContents` (see - # https://github.com/NixOS/nix/issues/12690). - /* - packaging-overriding = - let - pkgs = nixpkgsFor.${system}.native; - nix = self.packages.${system}.nix; - in - assert (nix.appendPatches [ pkgs.emptyFile ]).libs.nix-util.src.patches == [ pkgs.emptyFile ]; - if pkgs.stdenv.buildPlatform.isDarwin then - lib.warn "packaging-overriding check currently disabled because of a permissions issue on macOS" pkgs.emptyFile - else - # If this fails, something might be wrong with how we've wired the scope, - # or something could be broken in Nixpkgs. 
- pkgs.testers.testEqualContents { - assertion = "trivial patch does not change source contents"; - expected = "${./.}"; - actual = - # Same for all components; nix-util is an arbitrary pick - (nix.appendPatches [ pkgs.emptyFile ]).libs.nix-util.src; - }; - */ + packaging-overriding = + let + pkgs = nixpkgsFor.${system}.native; + nix = self.packages.${system}.nix; + in + assert (nix.appendPatches [ pkgs.emptyFile ]).libs.nix-util.src.patches == [ pkgs.emptyFile ]; + if pkgs.stdenv.buildPlatform.isDarwin then + lib.warn "packaging-overriding check currently disabled because of a permissions issue on macOS" pkgs.emptyFile + else + # If this fails, something might be wrong with how we've wired the scope, + # or something could be broken in Nixpkgs. + pkgs.testers.testEqualContents { + assertion = "trivial patch does not change source contents"; + expected = "${./.}"; + actual = + # Same for all components; nix-util is an arbitrary pick + (nix.appendPatches [ pkgs.emptyFile ]).libs.nix-util.src; + }; } // (lib.optionalAttrs (builtins.elem system linux64BitSystems)) { dockerImage = self.hydraJobs.dockerImage.${system}; @@ -450,8 +433,7 @@ dockerImage = let pkgs = nixpkgsFor.${system}.native; - image = import ./docker.nix { - inherit pkgs; + image = pkgs.callPackage ./docker.nix { tag = pkgs.nix.version; }; in diff --git a/maintainers/data/release-credits-email-to-handle.json b/maintainers/data/release-credits-email-to-handle.json index 49d33cfdd..bf00b69bc 100644 --- a/maintainers/data/release-credits-email-to-handle.json +++ b/maintainers/data/release-credits-email-to-handle.json @@ -166,5 +166,24 @@ "the-tumultuous-unicorn-of-darkness@gmx.com": "TheTumultuousUnicornOfDarkness", "dev@rodney.id.au": "rvl", "pe@pijul.org": "P-E-Meunier", - "yannik@floxdev.com": "ysndr" + "yannik@floxdev.com": "ysndr", + "73017521+egorkonovalov@users.noreply.github.com": "egorkonovalov", + "raito@lix.systems": null, + "nikita.nikita.krasnov@gmail.com": "synalice", + "lucperkins@gmail.com": "lucperkins", + "vladimir.cunat@nic.cz": "vcunat", + "walther@technowledgy.de": "wolfgangwalther", + "jayesh.mail@gmail.com": "jayeshv", + "samuli.thomasson@pm.me": "SimSaladin", + "kevin@stravers.net": "kstrafe", + "poperigby@mailbox.org": "poperigby", + "cole.helbling@determinate.systems": "cole-h", + "donottellmetonottellyou@gmail.com": "donottellmetonottellyou", + "getchoo@tuta.io": "getchoo", + "alex.ford@determinate.systems": "gustavderdrache", + "stefan.r.boca@gmail.com": "stefanboca", + "gwenn.lebihan7@gmail.com": "gwennlbh", + "hey@ewen.works": "gwennlbh", + "matt@sturgeon.me.uk": "MattSturgeon", + "pbsds@hotmail.com": "pbsds" } diff --git a/maintainers/data/release-credits-handle-to-name.json b/maintainers/data/release-credits-handle-to-name.json index 968c51f58..40258300b 100644 --- a/maintainers/data/release-credits-handle-to-name.json +++ b/maintainers/data/release-credits-handle-to-name.json @@ -146,5 +146,21 @@ "ajlekcahdp4": "Alexander Romanov", "Valodim": "Vincent Breitmoser", "rvl": "Rodney Lorrimar", - "whatsthecraic": "Dean De Leo" + "whatsthecraic": "Dean De Leo", + "gwennlbh": "Gwenn Le Bihan", + "donottellmetonottellyou": "Jade Masker", + "kstrafe": null, + "synalice": "Nikita Krasnov", + "poperigby": "PopeRigby", + "MattSturgeon": "Matt Sturgeon", + "lucperkins": "Luc Perkins", + "gustavderdrache": null, + "SimSaladin": "Samuli Thomasson", + "getchoo": "Seth Flynn", + "stefanboca": "Stefan Boca", + "wolfgangwalther": "Wolfgang Walther", + "pbsds": "Peder Bergebakken Sundt", + "egorkonovalov": "Egor 
Konovalov", + "jayeshv": "jayeshv", + "vcunat": "Vladim\u00edr \u010cun\u00e1t" } diff --git a/maintainers/flake-module.nix b/maintainers/flake-module.nix index 6497b17c1..1058d6334 100644 --- a/maintainers/flake-module.nix +++ b/maintainers/flake-module.nix @@ -37,6 +37,118 @@ fi ''}"; }; + meson-format = { + enable = true; + files = "(meson.build|meson.options)$"; + entry = "${pkgs.writeScript "format-meson" '' + #!${pkgs.runtimeShell} + for file in "$@"; do + ${lib.getExe pkgs.meson} format -ic ${../meson.format} "$file" + done + ''}"; + excludes = [ + # We haven't applied formatting to these files yet + ''^doc/manual/meson.build$'' + ''^doc/manual/source/command-ref/meson.build$'' + ''^doc/manual/source/development/meson.build$'' + ''^doc/manual/source/language/meson.build$'' + ''^doc/manual/source/meson.build$'' + ''^doc/manual/source/release-notes/meson.build$'' + ''^doc/manual/source/store/meson.build$'' + ''^misc/bash/meson.build$'' + ''^misc/fish/meson.build$'' + ''^misc/launchd/meson.build$'' + ''^misc/meson.build$'' + ''^misc/systemd/meson.build$'' + ''^misc/zsh/meson.build$'' + ''^nix-meson-build-support/$'' + ''^nix-meson-build-support/big-objs/meson.build$'' + ''^nix-meson-build-support/common/meson.build$'' + ''^nix-meson-build-support/deps-lists/meson.build$'' + ''^nix-meson-build-support/export/meson.build$'' + ''^nix-meson-build-support/export-all-symbols/meson.build$'' + ''^nix-meson-build-support/generate-header/meson.build$'' + ''^nix-meson-build-support/libatomic/meson.build$'' + ''^nix-meson-build-support/subprojects/meson.build$'' + ''^scripts/meson.build$'' + ''^src/external-api-docs/meson.build$'' + ''^src/internal-api-docs/meson.build$'' + ''^src/libcmd/include/nix/cmd/meson.build$'' + ''^src/libcmd/meson.build$'' + ''^src/libcmd/nix-meson-build-support$'' + ''^src/libexpr/include/nix/expr/meson.build$'' + ''^src/libexpr/meson.build$'' + ''^src/libexpr/nix-meson-build-support$'' + ''^src/libexpr-c/meson.build$'' + ''^src/libexpr-c/nix-meson-build-support$'' + ''^src/libexpr-test-support/meson.build$'' + ''^src/libexpr-test-support/nix-meson-build-support$'' + ''^src/libexpr-tests/meson.build$'' + ''^src/libexpr-tests/nix-meson-build-support$'' + ''^src/libfetchers/include/nix/fetchers/meson.build$'' + ''^src/libfetchers/meson.build$'' + ''^src/libfetchers/nix-meson-build-support$'' + ''^src/libfetchers-c/meson.build$'' + ''^src/libfetchers-c/nix-meson-build-support$'' + ''^src/libfetchers-tests/meson.build$'' + ''^src/libfetchers-tests/nix-meson-build-support$'' + ''^src/libflake/include/nix/flake/meson.build$'' + ''^src/libflake/meson.build$'' + ''^src/libflake/nix-meson-build-support$'' + ''^src/libflake-c/meson.build$'' + ''^src/libflake-c/nix-meson-build-support$'' + ''^src/libflake-tests/meson.build$'' + ''^src/libflake-tests/nix-meson-build-support$'' + ''^src/libmain/include/nix/main/meson.build$'' + ''^src/libmain/meson.build$'' + ''^src/libmain/nix-meson-build-support$'' + ''^src/libmain-c/meson.build$'' + ''^src/libmain-c/nix-meson-build-support$'' + ''^src/libstore/include/nix/store/meson.build$'' + ''^src/libstore/meson.build$'' + ''^src/libstore/nix-meson-build-support$'' + ''^src/libstore/unix/include/nix/store/meson.build$'' + ''^src/libstore/unix/meson.build$'' + ''^src/libstore/windows/meson.build$'' + ''^src/libstore-c/meson.build$'' + ''^src/libstore-c/nix-meson-build-support$'' + ''^src/libstore-test-support/include/nix/store/tests/meson.build$'' + ''^src/libstore-test-support/meson.build$'' + 
''^src/libstore-test-support/nix-meson-build-support$'' + ''^src/libstore-tests/meson.build$'' + ''^src/libstore-tests/nix-meson-build-support$'' + ''^src/libutil/meson.build$'' + ''^src/libutil/nix-meson-build-support$'' + ''^src/libutil/unix/include/nix/util/meson.build$'' + ''^src/libutil/unix/meson.build$'' + ''^src/libutil/windows/meson.build$'' + ''^src/libutil-c/meson.build$'' + ''^src/libutil-c/nix-meson-build-support$'' + ''^src/libutil-test-support/include/nix/util/tests/meson.build$'' + ''^src/libutil-test-support/meson.build$'' + ''^src/libutil-test-support/nix-meson-build-support$'' + ''^src/libutil-tests/meson.build$'' + ''^src/libutil-tests/nix-meson-build-support$'' + ''^src/nix/meson.build$'' + ''^src/nix/nix-meson-build-support$'' + ''^src/perl/lib/Nix/meson.build$'' + ''^src/perl/meson.build$'' + ''^tests/functional/ca/meson.build$'' + ''^tests/functional/common/meson.build$'' + ''^tests/functional/dyn-drv/meson.build$'' + ''^tests/functional/flakes/meson.build$'' + ''^tests/functional/git-hashing/meson.build$'' + ''^tests/functional/local-overlay-store/meson.build$'' + ''^tests/functional/meson.build$'' + ''^src/libcmd/meson.options$'' + ''^src/libexpr/meson.options$'' + ''^src/libstore/meson.options$'' + ''^src/libutil/meson.options$'' + ''^src/libutil-c/meson.options$'' + ''^src/nix/meson.options$'' + ''^src/perl/meson.options$'' + ]; + }; nixfmt-rfc-style = { enable = true; excludes = [ @@ -81,7 +193,6 @@ # We haven't applied formatting to these files yet ''^doc/manual/redirects\.js$'' ''^doc/manual/theme/highlight\.js$'' - ''^precompiled-headers\.h$'' ''^src/build-remote/build-remote\.cc$'' ''^src/libcmd/built-path\.cc$'' ''^src/libcmd/include/nix/cmd/built-path\.hh$'' @@ -145,7 +256,6 @@ ''^src/libexpr/include/nix/expr/value-to-json\.hh$'' ''^src/libexpr/value-to-xml\.cc$'' ''^src/libexpr/include/nix/expr/value-to-xml\.hh$'' - ''^src/libexpr/include/nix/expr/value\.hh$'' ''^src/libexpr/value/context\.cc$'' ''^src/libexpr/include/nix/expr/value/context\.hh$'' ''^src/libfetchers/attrs\.cc$'' @@ -276,6 +386,8 @@ ''^src/libstore/store-api\.cc$'' ''^src/libstore/include/nix/store/store-api\.hh$'' ''^src/libstore/include/nix/store/store-dir-config\.hh$'' + ''^src/libstore/build/derivation-building-goal\.cc$'' + ''^src/libstore/include/nix/store/build/derivation-building-goal\.hh$'' ''^src/libstore/build/derivation-goal\.cc$'' ''^src/libstore/include/nix/store/build/derivation-goal\.hh$'' ''^src/libstore/build/drv-output-substitution-goal\.cc$'' @@ -357,7 +469,7 @@ ''^src/libutil/json-utils\.cc$'' ''^src/libutil/include/nix/util/json-utils\.hh$'' ''^src/libutil/linux/cgroup\.cc$'' - ''^src/libutil/linux/namespaces\.cc$'' + ''^src/libutil/linux/linux-namespaces\.cc$'' ''^src/libutil/logging\.cc$'' ''^src/libutil/include/nix/util/logging\.hh$'' ''^src/libutil/memory-source-accessor\.cc$'' diff --git a/maintainers/release-notes b/maintainers/release-notes index 6586b22dc..5bb492227 100755 --- a/maintainers/release-notes +++ b/maintainers/release-notes @@ -157,7 +157,7 @@ section_title="Release $version_full ($DATE)" if ! 
$IS_PATCH; then echo - echo "# Contributors" + echo "## Contributors" echo VERSION=$version_full ./maintainers/release-credits fi diff --git a/meson.build b/meson.build index 9f7471143..4a3a517fb 100644 --- a/meson.build +++ b/meson.build @@ -1,13 +1,16 @@ # This is just a stub project to include all the others as subprojects # for development shell purposes -project('nix-dev-shell', 'cpp', +project( + 'nix-dev-shell', + 'cpp', version : files('.version'), subproject_dir : 'src', default_options : [ 'localstatedir=/nix/var', + # hack for trailing newline ], - meson_version : '>= 1.1' + meson_version : '>= 1.1', ) # Internal Libraries diff --git a/meson.format b/meson.format new file mode 100644 index 000000000..4876dd962 --- /dev/null +++ b/meson.format @@ -0,0 +1,7 @@ +indent_by = ' ' +space_array = true +kwargs_force_multiline = false +wide_colon = true +group_arg_value = true +indent_before_comments = ' ' +use_editor_config = true diff --git a/meson.options b/meson.options index 329fe06bf..30670902e 100644 --- a/meson.options +++ b/meson.options @@ -1,13 +1,22 @@ # vim: filetype=meson -option('doc-gen', type : 'boolean', value : false, +option( + 'doc-gen', + type : 'boolean', + value : false, description : 'Generate documentation', ) -option('unit-tests', type : 'boolean', value : true, +option( + 'unit-tests', + type : 'boolean', + value : true, description : 'Build unit tests', ) -option('bindings', type : 'boolean', value : true, +option( + 'bindings', + type : 'boolean', + value : true, description : 'Build language bindings (e.g. Perl)', ) diff --git a/misc/systemd/nix-daemon.conf.in b/misc/systemd/nix-daemon.conf.in index e7b264234..a0ddc4019 100644 --- a/misc/systemd/nix-daemon.conf.in +++ b/misc/systemd/nix-daemon.conf.in @@ -1 +1,2 @@ -d @localstatedir@/nix/daemon-socket 0755 root root - - +d @localstatedir@/nix/daemon-socket 0755 root root - - +d @localstatedir@/nix/builds 0755 root root 7d - diff --git a/nix-meson-build-support/export/meson.build b/nix-meson-build-support/export/meson.build index b2409de85..950bd9544 100644 --- a/nix-meson-build-support/export/meson.build +++ b/nix-meson-build-support/export/meson.build @@ -11,12 +11,18 @@ endforeach requires_public += deps_public extra_pkg_config_variables = get_variable('extra_pkg_config_variables', {}) + +extra_cflags = [] +if not meson.project_name().endswith('-c') + extra_cflags += ['-std=c++2a'] +endif + import('pkgconfig').generate( this_library, filebase : meson.project_name(), name : 'Nix', description : 'Nix Package Manager', - extra_cflags : ['-std=c++2a'], + extra_cflags : extra_cflags, requires : requires_public, requires_private : requires_private, libraries_private : libraries_private, diff --git a/nix-meson-build-support/windows-version/meson.build b/nix-meson-build-support/windows-version/meson.build index 3a008e5df..ed4caaa9a 100644 --- a/nix-meson-build-support/windows-version/meson.build +++ b/nix-meson-build-support/windows-version/meson.build @@ -2,5 +2,5 @@ if host_machine.system() == 'windows' # https://learn.microsoft.com/en-us/cpp/porting/modifying-winver-and-win32-winnt?view=msvc-170 # #define _WIN32_WINNT_WIN8 0x0602 # We currently don't use any API which requires higher than this. 
- add_project_arguments([ '-D_WIN32_WINNT=0x0602' ], language: 'cpp') + add_project_arguments([ '-D_WIN32_WINNT=0x0602' ], language : 'cpp') endif diff --git a/packaging/components.nix b/packaging/components.nix index 46e2d5851..89272200e 100644 --- a/packaging/components.nix +++ b/packaging/components.nix @@ -157,6 +157,17 @@ let outputs = prevAttrs.outputs or [ "out" ] ++ [ "dev" ]; }; + fixupStaticLayer = finalAttrs: prevAttrs: { + postFixup = + prevAttrs.postFixup or "" + + lib.optionalString (stdenv.hostPlatform.isStatic) '' + # HACK: Otherwise the result will have the entire buildInputs closure + # injected by the pkgsStatic stdenv + # + rm -f $out/nix-support/propagated-build-inputs + ''; + }; + # Work around weird `--as-needed` linker behavior with BSD, see # https://github.com/mesonbuild/meson/issues/3593 bsdNoLinkAsNeeded = @@ -292,6 +303,7 @@ in scope.sourceLayer setVersionLayer mesonLayer + fixupStaticLayer scope.mesonComponentOverrides ]; mkMesonExecutable = mkPackageBuilder [ @@ -301,6 +313,7 @@ in setVersionLayer mesonLayer mesonBuildLayer + fixupStaticLayer scope.mesonComponentOverrides ]; mkMesonLibrary = mkPackageBuilder [ @@ -311,6 +324,7 @@ in setVersionLayer mesonBuildLayer mesonLibraryLayer + fixupStaticLayer scope.mesonComponentOverrides ]; diff --git a/precompiled-headers.h b/precompiled-headers.h deleted file mode 100644 index e1a3f8cc0..000000000 --- a/precompiled-headers.h +++ /dev/null @@ -1,63 +0,0 @@ -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#include - -#include -#include -#include -#include -#include -#include -#include -#include - -#ifndef _WIN32 -# include -# include -# include -# include -# include -# include -# include -# include -# include -#endif - -#include diff --git a/scripts/nix-profile-daemon.fish.in b/scripts/nix-profile-daemon.fish.in index 3fb727151..1a20dffd2 100644 --- a/scripts/nix-profile-daemon.fish.in +++ b/scripts/nix-profile-daemon.fish.in @@ -1,17 +1,16 @@ # Only execute this file once per shell. -if test -z "$HOME" || \ - test -n "$__ETC_PROFILE_NIX_SOURCED" +if test -z "$HOME" || test -n "$__ETC_PROFILE_NIX_SOURCED" exit end -set --global __ETC_PROFILE_NIX_SOURCED 1 +set --global --export __ETC_PROFILE_NIX_SOURCED 1 # Local helpers function add_path --argument-names new_path if type -q fish_add_path # fish 3.2.0 or newer - fish_add_path --prepend --global $new_path + fish_add_path --prepend --global $new_path else # older versions of fish if not contains $new_path $fish_user_paths @@ -24,7 +23,33 @@ end # Set up the per-user profile. 
-set --local NIX_LINK $HOME/.nix-profile +set --local NIX_LINK "$HOME/.nix-profile" +set --local NIX_LINK_NEW +if test -n "$XDG_STATE_HOME" + set NIX_LINK_NEW "$XDG_STATE_HOME/nix/profile" +else + set NIX_LINK_NEW "$HOME/.local/state/nix/profile" +end +if test -e "$NIX_LINK_NEW" + if test -t 2; and test -e "$NIX_LINK" + set --local warning "\033[1;35mwarning:\033[0m " + printf "$warning Both %s and legacy %s exist; using the former.\n" "$NIX_LINK_NEW" "$NIX_LINK" 1>&2 + + if test (realpath "$NIX_LINK") = (realpath "$NIX_LINK_NEW") + printf " Since the profiles match, you can safely delete either of them.\n" 1>&2 + else + # This should be an exceptionally rare occasion: the only way to get it would be to + # 1. Update to newer Nix; + # 2. Remove .nix-profile; + # 3. Set the $NIX_LINK_NEW to something other than the default user profile; + # 4. Roll back to older Nix. + # If someone did all that, they can probably figure out how to migrate the profile. + printf "$warning Profiles do not match. You should manually migrate from %s to %s.\n" "$NIX_LINK" "$NIX_LINK_NEW" 1>&2 + end + end + + set NIX_LINK "$NIX_LINK_NEW" +end # Set up environment. # This part should be kept in sync with nixpkgs:nixos/modules/programs/environment.nix diff --git a/scripts/nix-profile.fish.in b/scripts/nix-profile.fish.in index 3fb727151..abf716cec 100644 --- a/scripts/nix-profile.fish.in +++ b/scripts/nix-profile.fish.in @@ -1,17 +1,16 @@ # Only execute this file once per shell. -if test -z "$HOME" || \ - test -n "$__ETC_PROFILE_NIX_SOURCED" +if test -z "$HOME" || test -n "$__ETC_PROFILE_NIX_SOURCED" exit end -set --global __ETC_PROFILE_NIX_SOURCED 1 +set --global --export __ETC_PROFILE_NIX_SOURCED 1 # Local helpers function add_path --argument-names new_path if type -q fish_add_path # fish 3.2.0 or newer - fish_add_path --prepend --global $new_path + fish_add_path --prepend --global $new_path else # older versions of fish if not contains $new_path $fish_user_paths @@ -24,7 +23,38 @@ end # Set up the per-user profile. -set --local NIX_LINK $HOME/.nix-profile +set --local NIX_LINK +if test -n "$NIX_STATE_HOME" + set NIX_LINK "$NIX_STATE_HOME/.nix-profile" +else + set NIX_LINK "$HOME/.nix-profile" + set --local NIX_LINK_NEW + if test -n "$XDG_STATE_HOME" + set NIX_LINK_NEW "$XDG_STATE_HOME/nix/profile" + else + set NIX_LINK_NEW "$HOME/.local/state/nix/profile" + end + if test -e "$NIX_LINK_NEW" + if test -t 2; and test -e "$NIX_LINK" + set --local warning "\033[1;35mwarning:\033[0m " + printf "$warning Both %s and legacy %s exist; using the former.\n" "$NIX_LINK_NEW" "$NIX_LINK" 1>&2 + + if test (realpath "$NIX_LINK") = (realpath "$NIX_LINK_NEW") + printf " Since the profiles match, you can safely delete either of them.\n" 1>&2 + else + # This should be an exceptionally rare occasion: the only way to get it would be to + # 1. Update to newer Nix; + # 2. Remove .nix-profile; + # 3. Set the $NIX_LINK_NEW to something other than the default user profile; + # 4. Roll back to older Nix. + # If someone did all that, they can probably figure out how to migrate the profile. + printf "$warning Profiles do not match. You should manually migrate from %s to %s.\n" "$NIX_LINK" "$NIX_LINK_NEW" 1>&2 + end + end + + set NIX_LINK "$NIX_LINK_NEW" + end +end # Set up environment. 
# This part should be kept in sync with nixpkgs:nixos/modules/programs/environment.nix diff --git a/src/libcmd/common-eval-args.cc b/src/libcmd/common-eval-args.cc index 01123a772..a183e6f0e 100644 --- a/src/libcmd/common-eval-args.cc +++ b/src/libcmd/common-eval-args.cc @@ -33,7 +33,12 @@ EvalSettings evalSettings { auto flakeRef = parseFlakeRef(fetchSettings, std::string { rest }, {}, true, false); debug("fetching flake search path element '%s''", rest); auto [accessor, lockedRef] = flakeRef.resolve(state.store).lazyFetch(state.store); - auto storePath = nix::fetchToStore(*state.store, SourcePath(accessor), FetchMode::Copy, lockedRef.input.getName()); + auto storePath = nix::fetchToStore( + state.fetchSettings, + *state.store, + SourcePath(accessor), + FetchMode::Copy, + lockedRef.input.getName()); state.allowPath(storePath); return state.storePath(storePath); }, @@ -176,14 +181,23 @@ SourcePath lookupFileArg(EvalState & state, std::string_view s, const Path * bas state.store, state.fetchSettings, EvalSettings::resolvePseudoUrl(s)); - auto storePath = fetchToStore(*state.store, SourcePath(accessor), FetchMode::Copy); + auto storePath = fetchToStore( + state.fetchSettings, + *state.store, + SourcePath(accessor), + FetchMode::Copy); return state.storePath(storePath); } else if (hasPrefix(s, "flake:")) { auto flakeRef = parseFlakeRef(fetchSettings, std::string(s.substr(6)), {}, true, false); auto [accessor, lockedRef] = flakeRef.resolve(state.store).lazyFetch(state.store); - auto storePath = nix::fetchToStore(*state.store, SourcePath(accessor), FetchMode::Copy, lockedRef.input.getName()); + auto storePath = nix::fetchToStore( + state.fetchSettings, + *state.store, + SourcePath(accessor), + FetchMode::Copy, + lockedRef.input.getName()); state.allowPath(storePath); return state.storePath(storePath); } diff --git a/src/libcmd/include/nix/cmd/command.hh b/src/libcmd/include/nix/cmd/command.hh index c14ed9bde..0455a1d3c 100644 --- a/src/libcmd/include/nix/cmd/command.hh +++ b/src/libcmd/include/nix/cmd/command.hh @@ -338,7 +338,7 @@ struct MixEnvironment : virtual Args StringSet keepVars; StringSet unsetVars; - std::map setVars; + StringMap setVars; bool ignoreEnvironment; MixEnvironment(); diff --git a/src/libcmd/include/nix/cmd/common-eval-args.hh b/src/libcmd/include/nix/cmd/common-eval-args.hh index 62af64230..88ede1ed7 100644 --- a/src/libcmd/include/nix/cmd/common-eval-args.hh +++ b/src/libcmd/include/nix/cmd/common-eval-args.hh @@ -22,17 +22,17 @@ class Bindings; namespace flake { struct Settings; } /** - * @todo Get rid of global setttings variables + * @todo Get rid of global settings variables */ extern fetchers::Settings fetchSettings; /** - * @todo Get rid of global setttings variables + * @todo Get rid of global settings variables */ extern EvalSettings evalSettings; /** - * @todo Get rid of global setttings variables + * @todo Get rid of global settings variables */ extern flake::Settings flakeSettings; diff --git a/src/libcmd/installable-value.cc b/src/libcmd/installable-value.cc index 4eb4993b1..f5a129205 100644 --- a/src/libcmd/installable-value.cc +++ b/src/libcmd/installable-value.cc @@ -45,7 +45,7 @@ ref InstallableValue::require(ref installable) std::optional InstallableValue::trySinglePathToDerivedPaths(Value & v, const PosIdx pos, std::string_view errorCtx) { if (v.type() == nPath) { - auto storePath = fetchToStore(*state->store, v.path(), FetchMode::Copy); + auto storePath = fetchToStore(state->fetchSettings, *state->store, v.path(), FetchMode::Copy); return {{ .path = 
DerivedPath::Opaque { .path = std::move(storePath), diff --git a/src/libcmd/repl-interacter.cc b/src/libcmd/repl-interacter.cc index 769935efa..4de335dd5 100644 --- a/src/libcmd/repl-interacter.cc +++ b/src/libcmd/repl-interacter.cc @@ -2,6 +2,8 @@ #include +#include + #if USE_READLINE #include #include diff --git a/src/libcmd/repl.cc b/src/libcmd/repl.cc index f9ac59d36..8170bd579 100644 --- a/src/libcmd/repl.cc +++ b/src/libcmd/repl.cc @@ -69,6 +69,7 @@ struct NixRepl const static int envSize = 32768; std::shared_ptr staticEnv; + Value lastLoaded; Env * env; int displ; StringSet varNames; @@ -95,6 +96,7 @@ struct NixRepl void loadFiles(); void loadFlakes(); void reloadFilesAndFlakes(); + void showLastLoaded(); void addAttrsToScope(Value & attrs); void addVarToScope(const Symbol name, Value & v); Expr * parseString(std::string s); @@ -158,6 +160,8 @@ static std::ostream & showDebugTrace(std::ostream & out, const PosTable & positi return out; } +MakeError(IncompleteReplExpr, ParseError); + static bool isFirstRepl = true; ReplExitStatus NixRepl::mainLoop() @@ -205,16 +209,8 @@ ReplExitStatus NixRepl::mainLoop() default: unreachable(); } - } catch (ParseError & e) { - if (e.msg().find("unexpected end of file") != std::string::npos) { - // For parse errors on incomplete input, we continue waiting for the next line of - // input without clearing the input so far. - continue; - } else { - printMsg(lvlError, e.msg()); - } - } catch (EvalError & e) { - printMsg(lvlError, e.msg()); + } catch (IncompleteReplExpr &) { + continue; } catch (Error & e) { printMsg(lvlError, e.msg()); } catch (Interrupted & e) { @@ -294,7 +290,7 @@ StringSet NixRepl::completePrefix(const std::string & prefix) } catch (BadURL & e) { // Quietly ignore BadURL flake-related errors. } catch (FileNotFound & e) { - // Quietly ignore non-existent file beeing `import`-ed. + // Quietly ignore non-existent file being `import`-ed. 
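The REPL hunks above replace string matching on `ParseError` inside `mainLoop` with a dedicated `IncompleteReplExpr` exception, so the "is this input finished yet?" decision lives where the parse actually happens and the main loop only has to catch one exception type. Below is a minimal standalone sketch of that accumulate-and-retry pattern; the names and the toy trailing-backslash "incomplete" criterion are hypothetical, whereas in the patch the conversion happens in `NixRepl::evalString` on the parser's "unexpected end of file" message.

```cpp
#include <iostream>
#include <stdexcept>
#include <string>

// Hypothetical stand-in for IncompleteReplExpr.
struct IncompleteExpr : std::runtime_error
{
    using std::runtime_error::runtime_error;
};

// Stand-in for NixRepl::evalString: rethrow "input not finished" as
// IncompleteExpr so the caller keeps the buffer and reads more input.
void evalString(const std::string & s)
{
    if (!s.empty() && s.back() == '\\')  // toy criterion for "incomplete"
        throw IncompleteExpr("unexpected end of file");
    std::cout << "evaluated: " << s << "\n";
}

int main()
{
    std::string input, line;
    while (std::getline(std::cin, line)) {
        input += line;
        try {
            evalString(input);
            input.clear();              // complete expression: start a fresh buffer
        } catch (IncompleteExpr &) {
            input.pop_back();           // drop the continuation marker and keep reading
        }
    }
}
```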
} } @@ -378,6 +374,7 @@ ProcessLineResult NixRepl::processLine(std::string line) << " current profile\n" << " :l, :load Load Nix expression and add it to scope\n" << " :lf, :load-flake Load Nix flake and add it to scope\n" + << " :ll, :last-loaded Show most recently loaded variables added to scope\n" << " :p, :print Evaluate and print expression recursively\n" << " Strings are printed directly, without escaping.\n" << " :q, :quit Exit nix-repl\n" @@ -468,6 +465,10 @@ ProcessLineResult NixRepl::processLine(std::string line) loadFlake(arg); } + else if (command == ":ll" || command == ":last-loaded") { + showLastLoaded(); + } + else if (command == ":r" || command == ":reload") { state->resetFileCache(); reloadFilesAndFlakes(); @@ -483,7 +484,7 @@ ProcessLineResult NixRepl::processLine(std::string line) auto path = state->coerceToPath(noPos, v, context, "while evaluating the filename to edit"); return {path, 0}; } else if (v.isLambda()) { - auto pos = state->positions[v.payload.lambda.fun->pos]; + auto pos = state->positions[v.lambda().fun->pos]; if (auto path = std::get_if(&pos.origin)) return {*path, pos.line}; else @@ -760,6 +761,16 @@ void NixRepl::initEnv() varNames.emplace(state->symbols[i.first]); } +void NixRepl::showLastLoaded() +{ + RunPager pager; + + for (auto & i : *lastLoaded.attrs()) { + std::string_view name = state->symbols[i.name]; + logger->cout(name); + } +} + void NixRepl::reloadFilesAndFlakes() { @@ -813,6 +824,27 @@ void NixRepl::addAttrsToScope(Value & attrs) staticEnv->sort(); staticEnv->deduplicate(); notice("Added %1% variables.", attrs.attrs()->size()); + + lastLoaded = attrs; + + const int max_print = 20; + int counter = 0; + std::ostringstream loaded; + for (auto & i : attrs.attrs()->lexicographicOrder(state->symbols)) { + if (counter >= max_print) + break; + + if (counter > 0) + loaded << ", "; + + printIdentifier(loaded, state->symbols[i->name]); + counter += 1; + } + + notice("%1%", loaded.str()); + + if (attrs.attrs()->size() > max_print) + notice("... and %1% more; view with :ll", attrs.attrs()->size() - max_print); } @@ -837,7 +869,17 @@ Expr * NixRepl::parseString(std::string s) void NixRepl::evalString(std::string s, Value & v) { - Expr * e = parseString(s); + Expr * e; + try { + e = parseString(s); + } catch (ParseError & e) { + if (e.msg().find("unexpected end of file") != std::string::npos) + // For parse errors on incomplete input, we continue waiting for the next line of + // input without clearing the input so far. + throw IncompleteReplExpr(e.msg()); + else + throw; + } e->eval(*state, *env, v); state->forceValue(v, v.determinePos(noPos)); } diff --git a/src/libexpr-c/nix_api_value.cc b/src/libexpr-c/nix_api_value.cc index 298d94845..fb90e2872 100644 --- a/src/libexpr-c/nix_api_value.cc +++ b/src/libexpr-c/nix_api_value.cc @@ -252,7 +252,7 @@ const char * nix_get_path_string(nix_c_context * context, const nix_value * valu // We could use v.path().to_string().c_str(), but I'm concerned this // crashes. Looks like .path() allocates a CanonPath with a copy of the // string, then it gets the underlying data from that. 
- return v.payload.path.path; + return v.pathStr(); } NIXC_CATCH_ERRS_NULL } @@ -324,7 +324,7 @@ nix_value * nix_get_list_byidx(nix_c_context * context, const nix_value * value, try { auto & v = check_value_in(value); assert(v.type() == nix::nList); - auto * p = v.listElems()[ix]; + auto * p = v.listView()[ix]; nix_gc_incref(nullptr, p); if (p != nullptr) state->state.forceValue(*p, nix::noPos); diff --git a/src/libexpr-test-support/include/nix/expr/tests/meson.build b/src/libexpr-test-support/include/nix/expr/tests/meson.build index 710bd8d4e..84ec401ab 100644 --- a/src/libexpr-test-support/include/nix/expr/tests/meson.build +++ b/src/libexpr-test-support/include/nix/expr/tests/meson.build @@ -1,9 +1,10 @@ # Public headers directory -include_dirs = [include_directories('../../..')] +include_dirs = [ include_directories('../../..') ] headers = files( 'libexpr.hh', 'nix_api_expr.hh', 'value/context.hh', + # hack for trailing newline ) diff --git a/src/libexpr-tests/error_traces.cc b/src/libexpr-tests/error_traces.cc index a7522278d..32e49efe6 100644 --- a/src/libexpr-tests/error_traces.cc +++ b/src/libexpr-tests/error_traces.cc @@ -458,7 +458,7 @@ namespace nix { HintFmt("expected a function but found %s: %s", "a list", Uncolored("[ ]")), HintFmt("while evaluating the first argument passed to builtins.filterSource")); - // Usupported by store "dummy" + // Unsupported by store "dummy" // ASSERT_TRACE2("filterSource (_: 1) ./.", // TypeError, @@ -636,7 +636,7 @@ namespace nix { HintFmt("expected a set but found %s: %s", "a list", Uncolored("[ ]")), HintFmt("while evaluating the second argument passed to builtins.mapAttrs")); - // XXX: defered + // XXX: deferred // ASSERT_TRACE2("mapAttrs \"\" { foo.bar = 1; }", // TypeError, // HintFmt("attempt to call something which is not a function but %s", "a string"), @@ -666,9 +666,9 @@ namespace nix { HintFmt("expected a set but found %s: %s", "an integer", Uncolored(ANSI_CYAN "1" ANSI_NORMAL)), HintFmt("while evaluating a value of the list passed as second argument to builtins.zipAttrsWith")); - // XXX: How to properly tell that the fucntion takes two arguments ? + // XXX: How to properly tell that the function takes two arguments ? // The same question also applies to sort, and maybe others. - // Due to lazyness, we only create a thunk, and it fails later on. + // Due to laziness, we only create a thunk, and it fails later on. 
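Many hunks in this and the following test files mechanically replace `listElems()`/`listItems()` with `listView()`. A minimal sketch of the idea follows, assuming a `std::span`-style view; this is not the actual `nix::Value` layout. The point of the change is that the new accessor returns one bounds-carrying object that supports both indexing and range-for, instead of a raw element pointer paired with a separate size.

```cpp
// Hypothetical miniature of the accessor migration; names are illustrative.
#include <cassert>
#include <cstddef>
#include <span>
#include <vector>

struct Value;  // elements stay opaque here

struct ListValue
{
    std::vector<Value *> elems;

    // Old-style accessors: a raw pointer plus a separate size.
    Value ** listElems() { return elems.data(); }
    std::size_t listSize() const { return elems.size(); }

    // New-style accessor: one object carrying pointer and size together,
    // directly usable with operator[] and in range-for loops.
    std::span<Value *> listView() { return {elems.data(), elems.size()}; }
};

int main()
{
    ListValue v;
    v.elems = {nullptr, nullptr};
    assert(v.listView().size() == v.listSize());
    for (Value * e : v.listView())
        assert(e == nullptr);                  // iterate without raw pointer arithmetic
    assert(v.listView()[1] == v.listElems()[1]);
}
```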
// ASSERT_TRACE2("zipAttrsWith (_: 1) [ { foo = 1; } ]", // TypeError, // HintFmt("attempt to call something which is not a function but %s", "an integer"), @@ -877,7 +877,7 @@ namespace nix { HintFmt("expected a function but found %s: %s", "an integer", Uncolored(ANSI_CYAN "1" ANSI_NORMAL)), HintFmt("while evaluating the first argument passed to builtins.genList")); - // XXX: defered + // XXX: deferred // ASSERT_TRACE2("genList (x: x + \"foo\") 2 #TODO", // TypeError, // HintFmt("cannot add %s to an integer", "a string"), diff --git a/src/libexpr-tests/meson.build b/src/libexpr-tests/meson.build index f7822edfd..35ae8a9d0 100644 --- a/src/libexpr-tests/meson.build +++ b/src/libexpr-tests/meson.build @@ -32,8 +32,8 @@ deps_private += rapidcheck gtest = dependency('gtest') deps_private += gtest -gtest = dependency('gmock') -deps_private += gtest +gmock = dependency('gmock') +deps_private += gmock configdata = configuration_data() configdata.set_quoted('PACKAGE_VERSION', meson.project_version()) diff --git a/src/libexpr-tests/primops.cc b/src/libexpr-tests/primops.cc index 7695a587a..9b5590d8d 100644 --- a/src/libexpr-tests/primops.cc +++ b/src/libexpr-tests/primops.cc @@ -150,8 +150,8 @@ namespace nix { TEST_F(PrimOpTest, attrValues) { auto v = eval("builtins.attrValues { x = \"foo\"; a = 1; }"); ASSERT_THAT(v, IsListOfSize(2)); - ASSERT_THAT(*v.listElems()[0], IsIntEq(1)); - ASSERT_THAT(*v.listElems()[1], IsStringEq("foo")); + ASSERT_THAT(*v.listView()[0], IsIntEq(1)); + ASSERT_THAT(*v.listView()[1], IsStringEq("foo")); } TEST_F(PrimOpTest, getAttr) { @@ -250,8 +250,8 @@ namespace nix { TEST_F(PrimOpTest, catAttrs) { auto v = eval("builtins.catAttrs \"a\" [{a = 1;} {b = 0;} {a = 2;}]"); ASSERT_THAT(v, IsListOfSize(2)); - ASSERT_THAT(*v.listElems()[0], IsIntEq(1)); - ASSERT_THAT(*v.listElems()[1], IsIntEq(2)); + ASSERT_THAT(*v.listView()[0], IsIntEq(1)); + ASSERT_THAT(*v.listView()[1], IsIntEq(2)); } TEST_F(PrimOpTest, functionArgs) { @@ -301,6 +301,7 @@ namespace nix { TEST_F(PrimOpTest, elemtAtOutOfBounds) { ASSERT_THROW(eval("builtins.elemAt [0 1 2 3] 5"), Error); + ASSERT_THROW(eval("builtins.elemAt [0] 4294967296"), Error); } TEST_F(PrimOpTest, head) { @@ -319,7 +320,8 @@ namespace nix { TEST_F(PrimOpTest, tail) { auto v = eval("builtins.tail [ 3 2 1 0 ]"); ASSERT_THAT(v, IsListOfSize(3)); - for (const auto [n, elem] : enumerate(v.listItems())) + auto listView = v.listView(); + for (const auto [n, elem] : enumerate(listView)) ASSERT_THAT(*elem, IsIntEq(2 - static_cast(n))); } @@ -330,17 +332,17 @@ namespace nix { TEST_F(PrimOpTest, map) { auto v = eval("map (x: \"foo\" + x) [ \"bar\" \"bla\" \"abc\" ]"); ASSERT_THAT(v, IsListOfSize(3)); - auto elem = v.listElems()[0]; + auto elem = v.listView()[0]; ASSERT_THAT(*elem, IsThunk()); state.forceValue(*elem, noPos); ASSERT_THAT(*elem, IsStringEq("foobar")); - elem = v.listElems()[1]; + elem = v.listView()[1]; ASSERT_THAT(*elem, IsThunk()); state.forceValue(*elem, noPos); ASSERT_THAT(*elem, IsStringEq("foobla")); - elem = v.listElems()[2]; + elem = v.listView()[2]; ASSERT_THAT(*elem, IsThunk()); state.forceValue(*elem, noPos); ASSERT_THAT(*elem, IsStringEq("fooabc")); @@ -349,7 +351,7 @@ namespace nix { TEST_F(PrimOpTest, filter) { auto v = eval("builtins.filter (x: x == 2) [ 3 2 3 2 3 2 ]"); ASSERT_THAT(v, IsListOfSize(3)); - for (const auto elem : v.listItems()) + for (const auto elem : v.listView()) ASSERT_THAT(*elem, IsIntEq(2)); } @@ -366,7 +368,8 @@ namespace nix { TEST_F(PrimOpTest, concatLists) { auto v = eval("builtins.concatLists 
[[1 2] [3 4]]"); ASSERT_THAT(v, IsListOfSize(4)); - for (const auto [i, elem] : enumerate(v.listItems())) + auto listView = v.listView(); + for (const auto [i, elem] : enumerate(listView)) ASSERT_THAT(*elem, IsIntEq(static_cast(i)+1)); } @@ -404,7 +407,8 @@ namespace nix { auto v = eval("builtins.genList (x: x + 1) 3"); ASSERT_EQ(v.type(), nList); ASSERT_EQ(v.listSize(), 3u); - for (const auto [i, elem] : enumerate(v.listItems())) { + auto listView = v.listView(); + for (const auto [i, elem] : enumerate(listView)) { ASSERT_THAT(*elem, IsThunk()); state.forceValue(*elem, noPos); ASSERT_THAT(*elem, IsIntEq(static_cast(i)+1)); @@ -417,7 +421,8 @@ namespace nix { ASSERT_EQ(v.listSize(), 6u); const std::vector numbers = { 42, 77, 147, 249, 483, 526 }; - for (const auto [n, elem] : enumerate(v.listItems())) + auto listView = v.listView(); + for (const auto [n, elem] : enumerate(listView)) ASSERT_THAT(*elem, IsIntEq(numbers[n])); } @@ -428,17 +433,17 @@ namespace nix { auto right = v.attrs()->get(createSymbol("right")); ASSERT_NE(right, nullptr); ASSERT_THAT(*right->value, IsListOfSize(2)); - ASSERT_THAT(*right->value->listElems()[0], IsIntEq(23)); - ASSERT_THAT(*right->value->listElems()[1], IsIntEq(42)); + ASSERT_THAT(*right->value->listView()[0], IsIntEq(23)); + ASSERT_THAT(*right->value->listView()[1], IsIntEq(42)); auto wrong = v.attrs()->get(createSymbol("wrong")); ASSERT_NE(wrong, nullptr); ASSERT_EQ(wrong->value->type(), nList); ASSERT_EQ(wrong->value->listSize(), 3u); ASSERT_THAT(*wrong->value, IsListOfSize(3)); - ASSERT_THAT(*wrong->value->listElems()[0], IsIntEq(1)); - ASSERT_THAT(*wrong->value->listElems()[1], IsIntEq(9)); - ASSERT_THAT(*wrong->value->listElems()[2], IsIntEq(3)); + ASSERT_THAT(*wrong->value->listView()[0], IsIntEq(1)); + ASSERT_THAT(*wrong->value->listView()[1], IsIntEq(9)); + ASSERT_THAT(*wrong->value->listView()[2], IsIntEq(3)); } TEST_F(PrimOpTest, concatMap) { @@ -447,7 +452,8 @@ namespace nix { ASSERT_EQ(v.listSize(), 6u); const std::vector numbers = { 1, 2, 0, 3, 4, 0 }; - for (const auto [n, elem] : enumerate(v.listItems())) + auto listView = v.listView(); + for (const auto [n, elem] : enumerate(listView)) ASSERT_THAT(*elem, IsIntEq(numbers[n])); } @@ -592,6 +598,16 @@ namespace nix { ASSERT_THAT(v, IsStringEq("n")); } + TEST_F(PrimOpTest, substringHugeStart){ + auto v = eval("builtins.substring 4294967296 5 \"nixos\""); + ASSERT_THAT(v, IsStringEq("")); + } + + TEST_F(PrimOpTest, substringHugeLength){ + auto v = eval("builtins.substring 0 4294967296 \"nixos\""); + ASSERT_THAT(v, IsStringEq("nixos")); + } + TEST_F(PrimOpTest, substringEmptyString){ auto v = eval("builtins.substring 1 3 \"\""); ASSERT_THAT(v, IsStringEq("")); @@ -656,8 +672,8 @@ namespace nix { auto v = eval("derivation"); ASSERT_EQ(v.type(), nFunction); ASSERT_TRUE(v.isLambda()); - ASSERT_NE(v.payload.lambda.fun, nullptr); - ASSERT_TRUE(v.payload.lambda.fun->hasFormals()); + ASSERT_NE(v.lambda().fun, nullptr); + ASSERT_TRUE(v.lambda().fun->hasFormals()); } TEST_F(PrimOpTest, currentTime) { @@ -671,7 +687,8 @@ namespace nix { ASSERT_THAT(v, IsListOfSize(4)); const std::vector strings = { "1", "2", "3", "git" }; - for (const auto [n, p] : enumerate(v.listItems())) + auto listView = v.listView(); + for (const auto [n, p] : enumerate(listView)) ASSERT_THAT(*p, IsStringEq(strings[n])); } @@ -761,12 +778,12 @@ namespace nix { auto v = eval("builtins.split \"(a)b\" \"abc\""); ASSERT_THAT(v, IsListOfSize(3)); - ASSERT_THAT(*v.listElems()[0], IsStringEq("")); + ASSERT_THAT(*v.listView()[0], 
IsStringEq("")); - ASSERT_THAT(*v.listElems()[1], IsListOfSize(1)); - ASSERT_THAT(*v.listElems()[1]->listElems()[0], IsStringEq("a")); + ASSERT_THAT(*v.listView()[1], IsListOfSize(1)); + ASSERT_THAT(*v.listView()[1]->listView()[0], IsStringEq("a")); - ASSERT_THAT(*v.listElems()[2], IsStringEq("c")); + ASSERT_THAT(*v.listView()[2], IsStringEq("c")); } TEST_F(PrimOpTest, split2) { @@ -774,17 +791,17 @@ namespace nix { auto v = eval("builtins.split \"([ac])\" \"abc\""); ASSERT_THAT(v, IsListOfSize(5)); - ASSERT_THAT(*v.listElems()[0], IsStringEq("")); + ASSERT_THAT(*v.listView()[0], IsStringEq("")); - ASSERT_THAT(*v.listElems()[1], IsListOfSize(1)); - ASSERT_THAT(*v.listElems()[1]->listElems()[0], IsStringEq("a")); + ASSERT_THAT(*v.listView()[1], IsListOfSize(1)); + ASSERT_THAT(*v.listView()[1]->listView()[0], IsStringEq("a")); - ASSERT_THAT(*v.listElems()[2], IsStringEq("b")); + ASSERT_THAT(*v.listView()[2], IsStringEq("b")); - ASSERT_THAT(*v.listElems()[3], IsListOfSize(1)); - ASSERT_THAT(*v.listElems()[3]->listElems()[0], IsStringEq("c")); + ASSERT_THAT(*v.listView()[3], IsListOfSize(1)); + ASSERT_THAT(*v.listView()[3]->listView()[0], IsStringEq("c")); - ASSERT_THAT(*v.listElems()[4], IsStringEq("")); + ASSERT_THAT(*v.listView()[4], IsStringEq("")); } TEST_F(PrimOpTest, split3) { @@ -792,36 +809,36 @@ namespace nix { ASSERT_THAT(v, IsListOfSize(5)); // First list element - ASSERT_THAT(*v.listElems()[0], IsStringEq("")); + ASSERT_THAT(*v.listView()[0], IsStringEq("")); // 2nd list element is a list [ "" null ] - ASSERT_THAT(*v.listElems()[1], IsListOfSize(2)); - ASSERT_THAT(*v.listElems()[1]->listElems()[0], IsStringEq("a")); - ASSERT_THAT(*v.listElems()[1]->listElems()[1], IsNull()); + ASSERT_THAT(*v.listView()[1], IsListOfSize(2)); + ASSERT_THAT(*v.listView()[1]->listView()[0], IsStringEq("a")); + ASSERT_THAT(*v.listView()[1]->listView()[1], IsNull()); // 3rd element - ASSERT_THAT(*v.listElems()[2], IsStringEq("b")); + ASSERT_THAT(*v.listView()[2], IsStringEq("b")); // 4th element is a list: [ null "c" ] - ASSERT_THAT(*v.listElems()[3], IsListOfSize(2)); - ASSERT_THAT(*v.listElems()[3]->listElems()[0], IsNull()); - ASSERT_THAT(*v.listElems()[3]->listElems()[1], IsStringEq("c")); + ASSERT_THAT(*v.listView()[3], IsListOfSize(2)); + ASSERT_THAT(*v.listView()[3]->listView()[0], IsNull()); + ASSERT_THAT(*v.listView()[3]->listView()[1], IsStringEq("c")); // 5th element is the empty string - ASSERT_THAT(*v.listElems()[4], IsStringEq("")); + ASSERT_THAT(*v.listView()[4], IsStringEq("")); } TEST_F(PrimOpTest, split4) { auto v = eval("builtins.split \"([[:upper:]]+)\" \" FOO \""); ASSERT_THAT(v, IsListOfSize(3)); - auto first = v.listElems()[0]; - auto second = v.listElems()[1]; - auto third = v.listElems()[2]; + auto first = v.listView()[0]; + auto second = v.listView()[1]; + auto third = v.listView()[2]; ASSERT_THAT(*first, IsStringEq(" ")); ASSERT_THAT(*second, IsListOfSize(1)); - ASSERT_THAT(*second->listElems()[0], IsStringEq("FOO")); + ASSERT_THAT(*second->listView()[0], IsStringEq("FOO")); ASSERT_THAT(*third, IsStringEq(" ")); } @@ -839,14 +856,14 @@ namespace nix { TEST_F(PrimOpTest, match3) { auto v = eval("builtins.match \"a(b)(c)\" \"abc\""); ASSERT_THAT(v, IsListOfSize(2)); - ASSERT_THAT(*v.listElems()[0], IsStringEq("b")); - ASSERT_THAT(*v.listElems()[1], IsStringEq("c")); + ASSERT_THAT(*v.listView()[0], IsStringEq("b")); + ASSERT_THAT(*v.listView()[1], IsStringEq("c")); } TEST_F(PrimOpTest, match4) { auto v = eval("builtins.match \"[[:space:]]+([[:upper:]]+)[[:space:]]+\" \" FOO \""); 
ASSERT_THAT(v, IsListOfSize(1)); - ASSERT_THAT(*v.listElems()[0], IsStringEq("FOO")); + ASSERT_THAT(*v.listView()[0], IsStringEq("FOO")); } TEST_F(PrimOpTest, match5) { @@ -863,7 +880,8 @@ namespace nix { // ensure that the list is sorted const std::vector expected { "a", "x", "y", "z" }; - for (const auto [n, elem] : enumerate(v.listItems())) + auto listView = v.listView(); + for (const auto [n, elem] : enumerate(listView)) ASSERT_THAT(*elem, IsStringEq(expected[n])); } diff --git a/src/libexpr-tests/trivial.cc b/src/libexpr-tests/trivial.cc index 50a8f29f8..6eabad6d7 100644 --- a/src/libexpr-tests/trivial.cc +++ b/src/libexpr-tests/trivial.cc @@ -143,7 +143,7 @@ namespace nix { // Usually Nix rejects duplicate keys in an attrset but it does allow // so if it is an attribute set that contains disjoint sets of keys. // The below is equivalent to `{a.b = 1; a.c = 2; }`. - // The attribute set `a` will be a Thunk at first as the attribuets + // The attribute set `a` will be a Thunk at first as the attributes // have to be merged (or otherwise computed) and that is done in a lazy // manner. diff --git a/src/libexpr/attr-path.cc b/src/libexpr/attr-path.cc index 722b57bbf..111d04cf2 100644 --- a/src/libexpr/attr-path.cc +++ b/src/libexpr/attr-path.cc @@ -95,7 +95,7 @@ std::pair findAlongAttrPath(EvalState & state, const std::strin if (*attrIndex >= v->listSize()) throw AttrPathNotFound("list index %1% in selection path '%2%' is out of range", *attrIndex, attrPath); - v = v->listElems()[*attrIndex]; + v = v->listView()[*attrIndex]; pos = noPos; } diff --git a/src/libexpr/eval-cache.cc b/src/libexpr/eval-cache.cc index 72a6b60ea..39c1b827d 100644 --- a/src/libexpr/eval-cache.cc +++ b/src/libexpr/eval-cache.cc @@ -724,7 +724,7 @@ std::vector AttrCursor::getListOfStrings() std::vector res; - for (auto & elem : v.listItems()) + for (auto elem : v.listView()) res.push_back(std::string(root->state.forceStringNoCtx(*elem, noPos, "while evaluating an attribute for caching"))); if (root->db) diff --git a/src/libexpr/eval-gc.cc b/src/libexpr/eval-gc.cc index bec668001..5a4ecf035 100644 --- a/src/libexpr/eval-gc.cc +++ b/src/libexpr/eval-gc.cc @@ -4,6 +4,7 @@ #include "nix/util/config-global.hh" #include "nix/util/serialise.hh" #include "nix/expr/eval-gc.hh" +#include "nix/expr/value.hh" #include "expr-config-private.hh" @@ -52,6 +53,13 @@ static inline void initGCReal() GC_INIT(); + /* Register valid displacements in case we are using alignment niches + for storing the type information. This way tagged pointers are considered + to be valid, even when they are not aligned. 
*/ + if constexpr (detail::useBitPackedValueStorage) + for (std::size_t i = 1; i < sizeof(std::uintptr_t); ++i) + GC_register_displacement(i); + GC_set_oom_fn(oomHandler); /* Set the initial heap size to something fairly big (25% of diff --git a/src/libexpr/eval-profiler-settings.cc b/src/libexpr/eval-profiler-settings.cc new file mode 100644 index 000000000..1a35d4a2d --- /dev/null +++ b/src/libexpr/eval-profiler-settings.cc @@ -0,0 +1,49 @@ +#include "nix/expr/eval-profiler-settings.hh" +#include "nix/util/configuration.hh" +#include "nix/util/logging.hh" /* Needs to be included before config-impl.hh */ +#include "nix/util/config-impl.hh" +#include "nix/util/abstract-setting-to-json.hh" + +#include + +namespace nix { + +template<> +EvalProfilerMode BaseSetting::parse(const std::string & str) const +{ + if (str == "disabled") + return EvalProfilerMode::disabled; + else if (str == "flamegraph") + return EvalProfilerMode::flamegraph; + else + throw UsageError("option '%s' has invalid value '%s'", name, str); +} + +template<> +struct BaseSetting::trait +{ + static constexpr bool appendable = false; +}; + +template<> +std::string BaseSetting::to_string() const +{ + if (value == EvalProfilerMode::disabled) + return "disabled"; + else if (value == EvalProfilerMode::flamegraph) + return "flamegraph"; + else + unreachable(); +} + +NLOHMANN_JSON_SERIALIZE_ENUM( + EvalProfilerMode, + { + {EvalProfilerMode::disabled, "disabled"}, + {EvalProfilerMode::flamegraph, "flamegraph"}, + }); + +/* Explicit instantiation of templates */ +template class BaseSetting; + +} diff --git a/src/libexpr/eval-profiler.cc b/src/libexpr/eval-profiler.cc new file mode 100644 index 000000000..b65bc3a4d --- /dev/null +++ b/src/libexpr/eval-profiler.cc @@ -0,0 +1,355 @@ +#include "nix/expr/eval-profiler.hh" +#include "nix/expr/nixexpr.hh" +#include "nix/expr/eval.hh" +#include "nix/util/lru-cache.hh" + +namespace nix { + +void EvalProfiler::preFunctionCallHook(EvalState & state, const Value & v, std::span args, const PosIdx pos) {} + +void EvalProfiler::postFunctionCallHook(EvalState & state, const Value & v, std::span args, const PosIdx pos) +{ +} + +void MultiEvalProfiler::preFunctionCallHook( + EvalState & state, const Value & v, std::span args, const PosIdx pos) +{ + for (auto & profiler : profilers) { + if (profiler->getNeededHooks().test(Hook::preFunctionCall)) + profiler->preFunctionCallHook(state, v, args, pos); + } +} + +void MultiEvalProfiler::postFunctionCallHook( + EvalState & state, const Value & v, std::span args, const PosIdx pos) +{ + for (auto & profiler : profilers) { + if (profiler->getNeededHooks().test(Hook::postFunctionCall)) + profiler->postFunctionCallHook(state, v, args, pos); + } +} + +EvalProfiler::Hooks MultiEvalProfiler::getNeededHooksImpl() const +{ + Hooks hooks; + for (auto & p : profilers) + hooks |= p->getNeededHooks(); + return hooks; +} + +void MultiEvalProfiler::addProfiler(ref profiler) +{ + profilers.push_back(profiler); + invalidateNeededHooks(); +} + +namespace { + +class PosCache : private LRUCache +{ + const EvalState & state; + +public: + PosCache(const EvalState & state) + : LRUCache(524288) /* ~40MiB */ + , state(state) + { + } + + Pos lookup(PosIdx posIdx) + { + auto posOrNone = LRUCache::get(posIdx); + if (posOrNone) + return *posOrNone; + + auto pos = state.positions[posIdx]; + upsert(posIdx, pos); + return pos; + } +}; + +struct LambdaFrameInfo +{ + ExprLambda * expr; + /** Position where the lambda has been called. 
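A short aside on the `GC_register_displacement` loop added to `eval-gc.cc` above: when the type tag is packed into the alignment niche (the low bits of an 8-byte-aligned pointer), every tagged pointer becomes an interior pointer offset by 1..7 bytes, and Boehm GC has to be told that such offsets are valid references to the start of an object. The sketch below is conceptual only and does not reflect the real `Value` representation; the header path (`<gc/gc.h>` vs `<gc.h>`) depends on how the collector is installed.

```cpp
// Conceptual illustration only; not the actual nix::Value layout.
#include <gc/gc.h>
#include <cassert>
#include <cstdint>

int main()
{
    GC_INIT();

    // Same registration as the patch: offsets 1..7 into an object are now
    // treated as references that keep the whole object alive.
    for (std::size_t i = 1; i < sizeof(std::uintptr_t); ++i)
        GC_register_displacement(i);

    void * obj = GC_MALLOC(64);                                  // at least 8-byte aligned
    auto tagged = reinterpret_cast<std::uintptr_t>(obj) | 0x3;   // type tag in the low bits

    // The tag turns the pointer into an "interior" pointer; without the
    // displacement registration the collector could miss it and reclaim obj.
    assert((tagged & ~std::uintptr_t{7}) == reinterpret_cast<std::uintptr_t>(obj));
    return 0;
}
```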
*/ + PosIdx callPos = noPos; + std::ostream & symbolize(const EvalState & state, std::ostream & os, PosCache & posCache) const; + auto operator<=>(const LambdaFrameInfo & rhs) const = default; +}; + +/** Primop call. */ +struct PrimOpFrameInfo +{ + const PrimOp * expr; + /** Position where the primop has been called. */ + PosIdx callPos = noPos; + std::ostream & symbolize(const EvalState & state, std::ostream & os, PosCache & posCache) const; + auto operator<=>(const PrimOpFrameInfo & rhs) const = default; +}; + +/** Used for functor calls (attrset with __functor attr). */ +struct FunctorFrameInfo +{ + PosIdx pos; + std::ostream & symbolize(const EvalState & state, std::ostream & os, PosCache & posCache) const; + auto operator<=>(const FunctorFrameInfo & rhs) const = default; +}; + +struct DerivationStrictFrameInfo +{ + PosIdx callPos = noPos; + std::string drvName; + std::ostream & symbolize(const EvalState & state, std::ostream & os, PosCache & posCache) const; + auto operator<=>(const DerivationStrictFrameInfo & rhs) const = default; +}; + +/** Fallback frame info. */ +struct GenericFrameInfo +{ + PosIdx pos; + std::ostream & symbolize(const EvalState & state, std::ostream & os, PosCache & posCache) const; + auto operator<=>(const GenericFrameInfo & rhs) const = default; +}; + +using FrameInfo = + std::variant; +using FrameStack = std::vector; + +/** + * Stack sampling profiler. + */ +class SampleStack : public EvalProfiler +{ + /* How often stack profiles should be flushed to file. This avoids the need + to persist stack samples across the whole evaluation at the cost + of periodically flushing data to disk. */ + static constexpr std::chrono::microseconds profileDumpInterval = std::chrono::milliseconds(2000); + + Hooks getNeededHooksImpl() const override + { + return Hooks().set(preFunctionCall).set(postFunctionCall); + } + + FrameInfo getPrimOpFrameInfo(const PrimOp & primOp, std::span args, PosIdx pos); + +public: + SampleStack(EvalState & state, std::filesystem::path profileFile, std::chrono::nanoseconds period) + : state(state) + , sampleInterval(period) + , profileFd([&]() { + AutoCloseFD fd = toDescriptor(open(profileFile.string().c_str(), O_WRONLY | O_CREAT | O_TRUNC, 0660)); + if (!fd) + throw SysError("opening file %s", profileFile); + return fd; + }()) + , posCache(state) + { + } + + [[gnu::noinline]] void + preFunctionCallHook(EvalState & state, const Value & v, std::span args, const PosIdx pos) override; + [[gnu::noinline]] void + postFunctionCallHook(EvalState & state, const Value & v, std::span args, const PosIdx pos) override; + + void maybeSaveProfile(std::chrono::time_point now); + void saveProfile(); + FrameInfo getFrameInfoFromValueAndPos(const Value & v, std::span args, PosIdx pos); + + SampleStack(SampleStack &&) = default; + SampleStack & operator=(SampleStack &&) = delete; + SampleStack(const SampleStack &) = delete; + SampleStack & operator=(const SampleStack &) = delete; + ~SampleStack(); +private: + /** Hold on to an instance of EvalState for symbolizing positions. 
*/ + EvalState & state; + std::chrono::nanoseconds sampleInterval; + AutoCloseFD profileFd; + FrameStack stack; + std::map callCount; + std::chrono::time_point lastStackSample = + std::chrono::high_resolution_clock::now(); + std::chrono::time_point lastDump = std::chrono::high_resolution_clock::now(); + PosCache posCache; +}; + +FrameInfo SampleStack::getPrimOpFrameInfo(const PrimOp & primOp, std::span args, PosIdx pos) +{ + auto derivationInfo = [&]() -> std::optional { + /* Here we rely a bit on the implementation details of libexpr/primops/derivation.nix + and derivationStrict primop. This is not ideal, but is necessary for + the usefulness of the profiler. This might actually affect the evaluation, + but the cost shouldn't be that high as to make the traces entirely inaccurate. */ + if (primOp.name == "derivationStrict") { + try { + /* Error context strings don't actually matter, since we ignore all eval errors. */ + state.forceAttrs(*args[0], pos, ""); + auto attrs = args[0]->attrs(); + auto nameAttr = state.getAttr(state.sName, attrs, ""); + auto drvName = std::string(state.forceStringNoCtx(*nameAttr->value, pos, "")); + return DerivationStrictFrameInfo{.callPos = pos, .drvName = std::move(drvName)}; + } catch (...) { + /* Ignore all errors, since those will be diagnosed by the evaluator itself. */ + } + } + + return std::nullopt; + }(); + + return derivationInfo.value_or(PrimOpFrameInfo{.expr = &primOp, .callPos = pos}); +} + +FrameInfo SampleStack::getFrameInfoFromValueAndPos(const Value & v, std::span args, PosIdx pos) +{ + /* NOTE: No actual references to garbage collected values are not held in + the profiler. */ + if (v.isLambda()) + return LambdaFrameInfo{.expr = v.lambda().fun, .callPos = pos}; + else if (v.isPrimOp()) { + return getPrimOpFrameInfo(*v.primOp(), args, pos); + } else if (v.isPrimOpApp()) + /* Resolve primOp eagerly. Must not hold on to a reference to a Value. */ + return PrimOpFrameInfo{.expr = v.primOpAppPrimOp(), .callPos = pos}; + else if (state.isFunctor(v)) { + const auto functor = v.attrs()->get(state.sFunctor); + if (auto pos_ = posCache.lookup(pos); std::holds_alternative(pos_.origin)) + /* HACK: In case callsite position is unresolved. */ + return FunctorFrameInfo{.pos = functor->pos}; + return FunctorFrameInfo{.pos = pos}; + } else + /* NOTE: Add a stack frame even for invalid cases (e.g. when calling a non-function). This is what + * trace-function-calls does. */ + return GenericFrameInfo{.pos = pos}; +} + +[[gnu::noinline]] void +SampleStack::preFunctionCallHook(EvalState & state, const Value & v, std::span args, const PosIdx pos) +{ + stack.push_back(getFrameInfoFromValueAndPos(v, args, pos)); + + auto now = std::chrono::high_resolution_clock::now(); + + if (now - lastStackSample > sampleInterval) { + callCount[stack] += 1; + lastStackSample = now; + } + + /* Do this in preFunctionCallHook because we might throw an exception, but + callFunction uses Finally, which doesn't play well with exceptions. 
*/ + maybeSaveProfile(now); +} + +[[gnu::noinline]] void +SampleStack::postFunctionCallHook(EvalState & state, const Value & v, std::span args, const PosIdx pos) +{ + if (!stack.empty()) + stack.pop_back(); +} + +std::ostream & LambdaFrameInfo::symbolize(const EvalState & state, std::ostream & os, PosCache & posCache) const +{ + if (auto pos = posCache.lookup(callPos); std::holds_alternative(pos.origin)) + /* HACK: To avoid dubious «none»:0 in the generated profile if the origin can't be resolved + resort to printing the lambda location instead of the callsite position. */ + os << posCache.lookup(expr->getPos()); + else + os << pos; + if (expr->name) + os << ":" << state.symbols[expr->name]; + return os; +} + +std::ostream & GenericFrameInfo::symbolize(const EvalState & state, std::ostream & os, PosCache & posCache) const +{ + os << posCache.lookup(pos); + return os; +} + +std::ostream & FunctorFrameInfo::symbolize(const EvalState & state, std::ostream & os, PosCache & posCache) const +{ + os << posCache.lookup(pos) << ":functor"; + return os; +} + +std::ostream & PrimOpFrameInfo::symbolize(const EvalState & state, std::ostream & os, PosCache & posCache) const +{ + /* Sometimes callsite position can have an unresolved origin, which + leads to confusing «none»:0 locations in the profile. */ + auto pos = posCache.lookup(callPos); + if (!std::holds_alternative(pos.origin)) + os << posCache.lookup(callPos) << ":"; + os << *expr; + return os; +} + +std::ostream & +DerivationStrictFrameInfo::symbolize(const EvalState & state, std::ostream & os, PosCache & posCache) const +{ + /* Sometimes callsite position can have an unresolved origin, which + leads to confusing «none»:0 locations in the profile. */ + auto pos = posCache.lookup(callPos); + if (!std::holds_alternative(pos.origin)) + os << posCache.lookup(callPos) << ":"; + os << "primop derivationStrict:" << drvName; + return os; +} + +void SampleStack::maybeSaveProfile(std::chrono::time_point now) +{ + if (now - lastDump >= profileDumpInterval) + saveProfile(); + else + return; + + /* Save the last dump timepoint. Do this after actually saving data to file + to not account for the time doing the flushing to disk. */ + lastDump = std::chrono::high_resolution_clock::now(); + + /* Free up memory used for stack sampling. This might be very significant for + long-running evaluations, so we shouldn't hog too much memory. */ + callCount.clear(); +} + +void SampleStack::saveProfile() +{ + auto os = std::ostringstream{}; + for (auto & [stack, count] : callCount) { + auto first = true; + for (auto & pos : stack) { + if (first) + first = false; + else + os << ";"; + + std::visit([&](auto && info) { info.symbolize(state, os, posCache); }, pos); + } + os << " " << count; + writeLine(profileFd.get(), std::move(os).str()); + /* Clear ostringstream. */ + os.str(""); + os.clear(); + } +} + +SampleStack::~SampleStack() +{ + /* Guard against cases when we are already unwinding the stack. */ + try { + saveProfile(); + } catch (...) { + ignoreExceptionInDestructor(); + } +} + +} // namespace + +ref makeSampleStackProfiler(EvalState & state, std::filesystem::path profileFile, uint64_t frequency) +{ + /* 0 is a special value for sampling stack after each call. */ + std::chrono::nanoseconds period = frequency == 0 + ? 
std::chrono::nanoseconds{0} + : std::chrono::nanoseconds{std::nano::den / frequency / std::nano::num}; + return make_ref(state, profileFile, period); +} + +} diff --git a/src/libexpr/eval.cc b/src/libexpr/eval.cc index fcc935add..054b51564 100644 --- a/src/libexpr/eval.cc +++ b/src/libexpr/eval.cc @@ -2,6 +2,7 @@ #include "nix/expr/eval-settings.hh" #include "nix/expr/primops.hh" #include "nix/expr/print-options.hh" +#include "nix/expr/symbol-table.hh" #include "nix/util/exit.hh" #include "nix/util/types.hh" #include "nix/util/util.hh" @@ -90,20 +91,16 @@ std::string printValue(EvalState & state, Value & v) return out.str(); } +Value * Value::toPtr(SymbolStr str) noexcept +{ + return const_cast(str.valuePtr()); +} + void Value::print(EvalState & state, std::ostream & str, PrintOptions options) { printValue(state, str, *this, options); } -const Value * getPrimOp(const Value &v) { - const Value * primOp = &v; - while (primOp->isPrimOpApp()) { - primOp = primOp->payload.primOpApp.left; - } - assert(primOp->isPrimOp()); - return primOp; -} - std::string_view showType(ValueType type, bool withArticle) { #define WA(a, w) withArticle ? a " " w : w @@ -129,12 +126,12 @@ std::string showType(const Value & v) // Allow selecting a subset of enum values #pragma GCC diagnostic push #pragma GCC diagnostic ignored "-Wswitch-enum" - switch (v.internalType) { - case tString: return v.payload.string.context ? "a string with context" : "a string"; + switch (v.getInternalType()) { + case tString: return v.context() ? "a string with context" : "a string"; case tPrimOp: - return fmt("the built-in function '%s'", std::string(v.payload.primOp->name)); + return fmt("the built-in function '%s'", std::string(v.primOp()->name)); case tPrimOpApp: - return fmt("the partially applied built-in function '%s'", std::string(getPrimOp(v)->payload.primOp->name)); + return fmt("the partially applied built-in function '%s'", v.primOpAppPrimOp()->name); case tExternal: return v.external()->showType(); case tThunk: return v.isBlackhole() ? 
"a black hole" : "a thunk"; case tApp: return "a function application"; @@ -149,12 +146,10 @@ PosIdx Value::determinePos(const PosIdx pos) const // Allow selecting a subset of enum values #pragma GCC diagnostic push #pragma GCC diagnostic ignored "-Wswitch-enum" - if (this->pos != 0) - return PosIdx(this->pos); - switch (internalType) { + switch (getInternalType()) { case tAttrs: return attrs()->pos; - case tLambda: return payload.lambda.fun->pos; - case tApp: return payload.app.left->determinePos(pos); + case tLambda: return lambda().fun->pos; + case tApp: return app().left->determinePos(pos); default: return pos; } #pragma GCC diagnostic pop @@ -163,13 +158,12 @@ PosIdx Value::determinePos(const PosIdx pos) const bool Value::isTrivial() const { return - internalType != tApp - && internalType != tPrimOpApp - && (internalType != tThunk - || (dynamic_cast(payload.thunk.expr) - && ((ExprAttrs *) payload.thunk.expr)->dynamicAttrs.empty()) - || dynamic_cast(payload.thunk.expr) - || dynamic_cast(payload.thunk.expr)); + !isa() + && (!isa() + || (dynamic_cast(thunk().expr) + && ((ExprAttrs *) thunk().expr)->dynamicAttrs.empty()) + || dynamic_cast(thunk().expr) + || dynamic_cast(thunk().expr)); } @@ -215,6 +209,7 @@ EvalState::EvalState( , sRight(symbols.create("right")) , sWrong(symbols.create("wrong")) , sStructuredAttrs(symbols.create("__structuredAttrs")) + , sJson(symbols.create("__json")) , sAllowedReferences(symbols.create("allowedReferences")) , sAllowedRequisites(symbols.create("allowedRequisites")) , sDisallowedReferences(symbols.create("disallowedReferences")) @@ -372,8 +367,20 @@ EvalState::EvalState( ); createBaseEnv(settings); -} + /* Register function call tracer. */ + if (settings.traceFunctionCalls) + profiler.addProfiler(make_ref()); + + switch (settings.evalProfilerMode) { + case EvalProfilerMode::flamegraph: + profiler.addProfiler(makeSampleStackProfiler( + *this, settings.evalProfileFile.get(), settings.evalProfilerFrequency)); + break; + case EvalProfilerMode::disabled: + break; + } +} EvalState::~EvalState() { @@ -493,7 +500,7 @@ void EvalState::addConstant(const std::string & name, Value * v, Constant info) /* Install value the base environment. 
*/ staticBaseEnv->vars.emplace_back(symbols.create(name), baseEnvDispl); baseEnv.values[baseEnvDispl++] = v; - getBuiltins().payload.attrs->push_back(Attr(symbols.create(name2), v)); + const_cast(getBuiltins().attrs())->push_back(Attr(symbols.create(name2), v)); } } @@ -515,13 +522,15 @@ std::ostream & operator<<(std::ostream & output, const PrimOp & primOp) const PrimOp * Value::primOpAppPrimOp() const { - Value * left = payload.primOpApp.left; + Value * left = primOpApp().left; while (left && !left->isPrimOp()) { - left = left->payload.primOpApp.left; + left = left->primOpApp().left; } if (!left) return nullptr; + + assert(left->isPrimOp()); return left->primOp(); } @@ -529,7 +538,7 @@ const PrimOp * Value::primOpAppPrimOp() const void Value::mkPrimOp(PrimOp * p) { p->check(); - finishValue(tPrimOp, { .primOp = p }); + setStorage(p); } @@ -561,7 +570,7 @@ Value * EvalState::addPrimOp(PrimOp && primOp) else { staticBaseEnv->vars.emplace_back(envName, baseEnvDispl); baseEnv.values[baseEnvDispl++] = v; - getBuiltins().payload.attrs->push_back(Attr(symbols.create(primOp.name), v)); + const_cast(getBuiltins().attrs())->push_back(Attr(symbols.create(primOp.name), v)); } return v; @@ -598,7 +607,7 @@ std::optional EvalState::getDoc(Value & v) }; } if (v.isLambda()) { - auto exprLambda = v.payload.lambda.fun; + auto exprLambda = v.lambda().fun; std::ostringstream s; std::string name; @@ -645,7 +654,7 @@ std::optional EvalState::getDoc(Value & v) Value & functor = *v.attrs()->find(sFunctor)->value; Value * vp[] = {&v}; Value partiallyApplied; - // The first paramater is not user-provided, and may be + // The first parameter is not user-provided, and may be // handled by code that is opaque to the user, like lib.const = x: y: y; // So preferably we show docs that are relevant to the // "partially applied" function returned by e.g. `const`. @@ -908,7 +917,7 @@ void Value::mkStringMove(const char * s, const NixStringContext & context) void Value::mkPath(const SourcePath & path) { - mkPath(&*path.accessor, makeImmutableString(path.path.abs()), noPos.get()); + mkPath(&*path.accessor, makeImmutableString(path.path.abs())); } @@ -1535,9 +1544,14 @@ void EvalState::callFunction(Value & fun, std::span args, Value & vRes, { auto _level = addCallDepth(pos); - auto trace = settings.traceFunctionCalls - ? std::make_unique(positions[pos]) - : nullptr; + auto neededHooks = profiler.getNeededHooks(); + if (neededHooks.test(EvalProfiler::preFunctionCall)) [[unlikely]] + profiler.preFunctionCallHook(*this, fun, args, pos); + + Finally traceExit_{[&](){ + if (profiler.getNeededHooks().test(EvalProfiler::postFunctionCall)) [[unlikely]] + profiler.postFunctionCallHook(*this, fun, args, pos); + }}; forceValue(fun, pos); @@ -1559,13 +1573,13 @@ void EvalState::callFunction(Value & fun, std::span args, Value & vRes, if (vCur.isLambda()) { - ExprLambda & lambda(*vCur.payload.lambda.fun); + ExprLambda & lambda(*vCur.lambda().fun); auto size = (!lambda.arg ? 0 : 1) + (lambda.hasFormals() ? 
lambda.formals->formals.size() : 0); Env & env2(allocEnv(size)); - env2.up = vCur.payload.lambda.env; + env2.up = vCur.lambda().env; Displacement displ = 0; @@ -1595,7 +1609,7 @@ void EvalState::callFunction(Value & fun, std::span args, Value & vRes, symbols[i.name]) .atPos(lambda.pos) .withTrace(pos, "from call site") - .withFrame(*fun.payload.lambda.env, lambda) + .withFrame(*fun.lambda().env, lambda) .debugThrow(); } env2.values[displ++] = i.def->maybeThunk(*this, env2); @@ -1622,7 +1636,7 @@ void EvalState::callFunction(Value & fun, std::span args, Value & vRes, .atPos(lambda.pos) .withTrace(pos, "from call site") .withSuggestions(suggestions) - .withFrame(*fun.payload.lambda.env, lambda) + .withFrame(*fun.lambda().env, lambda) .debugThrow(); } unreachable(); @@ -1694,7 +1708,7 @@ void EvalState::callFunction(Value & fun, std::span args, Value & vRes, Value * primOp = &vCur; while (primOp->isPrimOpApp()) { argsDone++; - primOp = primOp->payload.primOpApp.left; + primOp = primOp->primOpApp().left; } assert(primOp->isPrimOp()); auto arity = primOp->primOp()->arity; @@ -1710,8 +1724,8 @@ void EvalState::callFunction(Value & fun, std::span args, Value & vRes, Value * vArgs[maxPrimOpArity]; auto n = argsDone; - for (Value * arg = &vCur; arg->isPrimOpApp(); arg = arg->payload.primOpApp.left) - vArgs[--n] = arg->payload.primOpApp.right; + for (Value * arg = &vCur; arg->isPrimOpApp(); arg = arg->primOpApp().left) + vArgs[--n] = arg->primOpApp().right; for (size_t i = 0; i < argsLeft; ++i) vArgs[argsDone + i] = args[i]; @@ -1817,14 +1831,14 @@ void EvalState::autoCallFunction(const Bindings & args, Value & fun, Value & res } } - if (!fun.isLambda() || !fun.payload.lambda.fun->hasFormals()) { + if (!fun.isLambda() || !fun.lambda().fun->hasFormals()) { res = fun; return; } - auto attrs = buildBindings(std::max(static_cast(fun.payload.lambda.fun->formals->formals.size()), args.size())); + auto attrs = buildBindings(std::max(static_cast(fun.lambda().fun->formals->formals.size()), args.size())); - if (fun.payload.lambda.fun->formals->ellipsis) { + if (fun.lambda().fun->formals->ellipsis) { // If the formals have an ellipsis (eg the function accepts extra args) pass // all available automatic arguments (which includes arguments specified on // the command line via --arg/--argstr) @@ -1832,7 +1846,7 @@ void EvalState::autoCallFunction(const Bindings & args, Value & fun, Value & res attrs.insert(v); } else { // Otherwise, only pass the arguments that the function accepts - for (auto & i : fun.payload.lambda.fun->formals->formals) { + for (auto & i : fun.lambda().fun->formals->formals) { auto j = args.get(i.name); if (j) { attrs.insert(*j); @@ -1842,7 +1856,7 @@ Nix attempted to evaluate a function as a top level expression; in this case it must have its arguments supplied either by default values, or passed explicitly with '--arg' or '--argstr'. 
See https://nixos.org/manual/nix/stable/language/constructs.html#functions.)", symbols[i.name]) - .atPos(i.pos).withFrame(*fun.payload.lambda.env, *fun.payload.lambda.fun).debugThrow(); + .atPos(i.pos).withFrame(*fun.lambda().env, *fun.lambda().fun).debugThrow(); } } } @@ -2000,9 +2014,10 @@ void EvalState::concatLists(Value & v, size_t nrLists, Value * const * lists, co auto list = buildList(len); auto out = list.elems; for (size_t n = 0, pos = 0; n < nrLists; ++n) { - auto l = lists[n]->listSize(); + auto listView = lists[n]->listView(); + auto l = listView.size(); if (l) - memcpy(out + pos, lists[n]->listElems(), l * sizeof(Value *)); + memcpy(out + pos, listView.data(), l * sizeof(Value *)); pos += l; } v.mkList(list); @@ -2155,7 +2170,7 @@ void EvalState::forceValueDeep(Value & v) try { // If the value is a thunk, we're evaling. Otherwise no trace necessary. auto dts = debugRepl && i.value->isThunk() - ? makeDebugTraceStacker(*this, *i.value->payload.thunk.expr, *i.value->payload.thunk.env, i.pos, + ? makeDebugTraceStacker(*this, *i.value->thunk().expr, *i.value->thunk().env, i.pos, "while evaluating the attribute '%1%'", symbols[i.name]) : nullptr; @@ -2167,7 +2182,7 @@ void EvalState::forceValueDeep(Value & v) } else if (v.isList()) { - for (auto v2 : v.listItems()) + for (auto v2 : v.listView()) recurse(*v2); } }; @@ -2235,8 +2250,18 @@ bool EvalState::forceBool(Value & v, const PosIdx pos, std::string_view errorCtx return v.boolean(); } +Bindings::const_iterator EvalState::getAttr(Symbol attrSym, const Bindings * attrSet, std::string_view errorCtx) +{ + auto value = attrSet->find(attrSym); + if (value == attrSet->end()) { + error("attribute '%s' missing", symbols[attrSym]) + .withTrace(noPos, errorCtx) + .debugThrow(); + } + return value; +} -bool EvalState::isFunctor(Value & fun) +bool EvalState::isFunctor(const Value & fun) const { return fun.type() == nAttrs && fun.attrs()->find(sFunctor) != fun.attrs()->end(); } @@ -2279,8 +2304,8 @@ std::string_view EvalState::forceString(Value & v, const PosIdx pos, std::string void copyContext(const Value & v, NixStringContext & context, const ExperimentalFeatureSettings & xpSettings) { - if (v.payload.string.context) - for (const char * * p = v.payload.string.context; *p; ++p) + if (v.context()) + for (const char * * p = v.context(); *p; ++p) context.insert(NixStringContextElem::parse(*p, xpSettings)); } @@ -2356,7 +2381,7 @@ BackedStringView EvalState::coerceToString( !canonicalizePath && !copyToStore ? // FIXME: hack to preserve path literals that end in a // slash, as in /foo/${x}. - v.payload.path.path + v.pathStr() : copyToStore ? store->printStorePath(copyPathToStore(context, v.path(), v.determinePos(pos))) : ({ @@ -2409,7 +2434,8 @@ BackedStringView EvalState::coerceToString( if (v.isList()) { std::string result; - for (auto [n, v2] : enumerate(v.listItems())) { + auto listView = v.listView(); + for (auto [n, v2] : enumerate(listView)) { try { result += *coerceToString(pos, *v2, context, "while evaluating one element of the list", @@ -2447,6 +2473,7 @@ StorePath EvalState::copyPathToStore(NixStringContext & context, const SourcePat ? *dstPathCached : [&]() { auto dstPath = fetchToStore( + fetchSettings, *store, path.resolveSymlinks(SymlinkResolution::Ancestors), settings.readOnlyMode ? 
FetchMode::DryRun : FetchMode::Copy, @@ -2491,7 +2518,7 @@ SourcePath EvalState::coerceToPath(const PosIdx pos, Value & v, NixStringContext } } - /* Any other value should be coercable to a string, interpreted + /* Any other value should be coercible to a string, interpreted relative to the root filesystem. */ auto path = coerceToString(pos, v, context, errorCtx, false, false, true).toOwned(); if (path == "" || path[0] != '/') @@ -2637,14 +2664,14 @@ void EvalState::assertEqValues(Value & v1, Value & v2, const PosIdx pos, std::st return; case nPath: - if (v1.payload.path.accessor != v2.payload.path.accessor) { + if (v1.pathAccessor() != v2.pathAccessor()) { error( "path '%s' is not equal to path '%s' because their accessors are different", ValuePrinter(*this, v1, errorPrintOptions), ValuePrinter(*this, v2, errorPrintOptions)) .debugThrow(); } - if (strcmp(v1.payload.path.path, v2.payload.path.path) != 0) { + if (strcmp(v1.pathStr(), v2.pathStr()) != 0) { error( "path '%s' is not equal to path '%s'", ValuePrinter(*this, v1, errorPrintOptions), @@ -2668,7 +2695,7 @@ void EvalState::assertEqValues(Value & v1, Value & v2, const PosIdx pos, std::st } for (size_t n = 0; n < v1.listSize(); ++n) { try { - assertEqValues(*v1.listElems()[n], *v2.listElems()[n], pos, errorCtx); + assertEqValues(*v1.listView()[n], *v2.listView()[n], pos, errorCtx); } catch (Error & e) { e.addTrace(positions[pos], "while comparing list element %d", n); throw; @@ -2811,8 +2838,8 @@ bool EvalState::eqValues(Value & v1, Value & v2, const PosIdx pos, std::string_v case nPath: return // FIXME: compare accessors by their fingerprint. - v1.payload.path.accessor == v2.payload.path.accessor - && strcmp(v1.payload.path.path, v2.payload.path.path) == 0; + v1.pathAccessor() == v2.pathAccessor() + && strcmp(v1.pathStr(), v2.pathStr()) == 0; case nNull: return true; @@ -2820,7 +2847,7 @@ bool EvalState::eqValues(Value & v1, Value & v2, const PosIdx pos, std::string_v case nList: if (v1.listSize() != v2.listSize()) return false; for (size_t n = 0; n < v1.listSize(); ++n) - if (!eqValues(*v1.listElems()[n], *v2.listElems()[n], pos, errorCtx)) return false; + if (!eqValues(*v1.listView()[n], *v2.listView()[n], pos, errorCtx)) return false; return true; case nAttrs: { @@ -2867,7 +2894,7 @@ bool EvalState::fullGC() { GC_gcollect(); // Check that it ran. We might replace this with a version that uses more // of the boehm API to get this reliably, at a maintenance cost. - // We use a 1K margin because technically this has a race condtion, but we + // We use a 1K margin because technically this has a race condition, but we // probably won't encounter it in practice, because the CLI isn't concurrent // like that. 
return GC_get_bytes_since_gc() < 1024; @@ -3020,7 +3047,7 @@ void EvalState::printStatistics() // XXX: overrides earlier assignment topObj["symbols"] = json::array(); auto &list = topObj["symbols"]; - symbols.dump([&](const std::string & s) { list.emplace_back(s); }); + symbols.dump([&](std::string_view s) { list.emplace_back(s); }); } if (outPath == "-") { std::cerr << topObj.dump(2) << std::endl; @@ -3159,7 +3186,7 @@ std::optional EvalState::resolveLookupPathPath(const LookupPath::Pat store, fetchSettings, EvalSettings::resolvePseudoUrl(value)); - auto storePath = fetchToStore(*store, SourcePath(accessor), FetchMode::Copy); + auto storePath = fetchToStore(fetchSettings, *store, SourcePath(accessor), FetchMode::Copy); return finish(this->storePath(storePath)); } catch (Error & e) { logWarning({ diff --git a/src/libexpr/function-trace.cc b/src/libexpr/function-trace.cc index 1dce51726..cda3bc2db 100644 --- a/src/libexpr/function-trace.cc +++ b/src/libexpr/function-trace.cc @@ -3,16 +3,20 @@ namespace nix { -FunctionCallTrace::FunctionCallTrace(const Pos & pos) : pos(pos) { +void FunctionCallTrace::preFunctionCallHook( + EvalState & state, const Value & v, std::span args, const PosIdx pos) +{ auto duration = std::chrono::high_resolution_clock::now().time_since_epoch(); auto ns = std::chrono::duration_cast(duration); - printMsg(lvlInfo, "function-trace entered %1% at %2%", pos, ns.count()); + printMsg(lvlInfo, "function-trace entered %1% at %2%", state.positions[pos], ns.count()); } -FunctionCallTrace::~FunctionCallTrace() { +void FunctionCallTrace::postFunctionCallHook( + EvalState & state, const Value & v, std::span args, const PosIdx pos) +{ auto duration = std::chrono::high_resolution_clock::now().time_since_epoch(); auto ns = std::chrono::duration_cast(duration); - printMsg(lvlInfo, "function-trace exited %1% at %2%", pos, ns.count()); + printMsg(lvlInfo, "function-trace exited %1% at %2%", state.positions[pos], ns.count()); } } diff --git a/src/libexpr/get-drvs.cc b/src/libexpr/get-drvs.cc index f15ad4d73..3c9ff9ff3 100644 --- a/src/libexpr/get-drvs.cc +++ b/src/libexpr/get-drvs.cc @@ -117,7 +117,7 @@ PackageInfo::Outputs PackageInfo::queryOutputs(bool withPaths, bool onlyOutputsT state->forceList(*i->value, i->pos, "while evaluating the 'outputs' attribute of a derivation"); /* For each output... 
*/ - for (auto elem : i->value->listItems()) { + for (auto elem : i->value->listView()) { std::string output(state->forceStringNoCtx(*elem, i->pos, "while evaluating the name of an output of a derivation")); if (withPaths) { @@ -159,7 +159,7 @@ PackageInfo::Outputs PackageInfo::queryOutputs(bool withPaths, bool onlyOutputsT /* ^ this shows during `nix-env -i` right under the bad derivation */ if (!outTI->isList()) throw errMsg; Outputs result; - for (auto elem : outTI->listItems()) { + for (auto elem : outTI->listView()) { if (elem->type() != nString) throw errMsg; auto out = outputs.find(elem->c_str()); if (out == outputs.end()) throw errMsg; @@ -206,7 +206,7 @@ bool PackageInfo::checkMeta(Value & v) { state->forceValue(v, v.determinePos(noPos)); if (v.type() == nList) { - for (auto elem : v.listItems()) + for (auto elem : v.listView()) if (!checkMeta(*elem)) return false; return true; } @@ -400,7 +400,8 @@ static void getDerivations(EvalState & state, Value & vIn, } else if (v.type() == nList) { - for (auto [n, elem] : enumerate(v.listItems())) { + auto listView = v.listView(); + for (auto [n, elem] : enumerate(listView)) { std::string pathPrefix2 = addToPath(pathPrefix, fmt("%d", n)); if (getDerivation(state, *elem, pathPrefix2, drvs, done, ignoreAssertionFailures)) getDerivations(state, *elem, pathPrefix2, autoArgs, drvs, done, ignoreAssertionFailures); diff --git a/src/libexpr/include/nix/expr/eval-inline.hh b/src/libexpr/include/nix/expr/eval-inline.hh index 6e5759c0b..7d13d7cc7 100644 --- a/src/libexpr/include/nix/expr/eval-inline.hh +++ b/src/libexpr/include/nix/expr/eval-inline.hh @@ -89,9 +89,9 @@ Env & EvalState::allocEnv(size_t size) void EvalState::forceValue(Value & v, const PosIdx pos) { if (v.isThunk()) { - Env * env = v.payload.thunk.env; + Env * env = v.thunk().env; assert(env || v.isBlackhole()); - Expr * expr = v.payload.thunk.expr; + Expr * expr = v.thunk().expr; try { v.mkBlackhole(); //checkInterrupt(); @@ -106,7 +106,7 @@ void EvalState::forceValue(Value & v, const PosIdx pos) } } else if (v.isApp()) - callFunction(*v.payload.app.left, *v.payload.app.right, v, pos); + callFunction(*v.app().left, *v.app().right, v, pos); } diff --git a/src/libexpr/include/nix/expr/eval-profiler-settings.hh b/src/libexpr/include/nix/expr/eval-profiler-settings.hh new file mode 100644 index 000000000..a94cde042 --- /dev/null +++ b/src/libexpr/include/nix/expr/eval-profiler-settings.hh @@ -0,0 +1,16 @@ +#pragma once +///@file + +#include "nix/util/configuration.hh" + +namespace nix { + +enum struct EvalProfilerMode { disabled, flamegraph }; + +template<> +EvalProfilerMode BaseSetting::parse(const std::string & str) const; + +template<> +std::string BaseSetting::to_string() const; + +} diff --git a/src/libexpr/include/nix/expr/eval-profiler.hh b/src/libexpr/include/nix/expr/eval-profiler.hh new file mode 100644 index 000000000..21629eebc --- /dev/null +++ b/src/libexpr/include/nix/expr/eval-profiler.hh @@ -0,0 +1,114 @@ +#pragma once +/** + * @file + * + * Evaluation profiler interface definitions and builtin implementations. + */ + +#include "nix/util/ref.hh" + +#include +#include +#include +#include +#include + +namespace nix { + +class EvalState; +class PosIdx; +struct Value; + +class EvalProfiler +{ +public: + enum Hook { + preFunctionCall, + postFunctionCall, + }; + + static constexpr std::size_t numHooks = Hook::postFunctionCall + 1; + using Hooks = std::bitset; + +private: + std::optional neededHooks; + +protected: + /** Invalidate the cached neededHooks. 
*/ + void invalidateNeededHooks() + { + neededHooks = std::nullopt; + } + + /** + * Get which hooks need to be called. + * + * This is the actual implementation which has to be defined by subclasses. + * Public API goes through the needsHooks, which is a + * non-virtual interface (NVI) which caches the return value. + */ + virtual Hooks getNeededHooksImpl() const + { + return Hooks{}; + } + +public: + /** + * Hook called in the EvalState::callFunction preamble. + * Gets called only if (getNeededHooks().test(Hook::preFunctionCall)) is true. + * + * @param state Evaluator state. + * @param v Function being invoked. + * @param args Function arguments. + * @param pos Function position. + */ + virtual void preFunctionCallHook(EvalState & state, const Value & v, std::span args, const PosIdx pos); + + /** + * Hook called on EvalState::callFunction exit. + * Gets called only if (getNeededHooks().test(Hook::postFunctionCall)) is true. + * + * @param state Evaluator state. + * @param v Function being invoked. + * @param args Function arguments. + * @param pos Function position. + */ + virtual void postFunctionCallHook(EvalState & state, const Value & v, std::span args, const PosIdx pos); + + virtual ~EvalProfiler() = default; + + /** + * Get which hooks need to be invoked for this EvalProfiler instance. + */ + Hooks getNeededHooks() + { + if (neededHooks.has_value()) + return *neededHooks; + return *(neededHooks = getNeededHooksImpl()); + } +}; + +/** + * Profiler that invokes multiple profilers at once. + */ +class MultiEvalProfiler : public EvalProfiler +{ + std::vector> profilers; + + [[gnu::noinline]] Hooks getNeededHooksImpl() const override; + +public: + MultiEvalProfiler() = default; + + /** Register a profiler instance. */ + void addProfiler(ref profiler); + + [[gnu::noinline]] void + preFunctionCallHook(EvalState & state, const Value & v, std::span args, const PosIdx pos) override; + [[gnu::noinline]] void + postFunctionCallHook(EvalState & state, const Value & v, std::span args, const PosIdx pos) override; +}; + +ref makeSampleStackProfiler(EvalState & state, std::filesystem::path profileFile, uint64_t frequency); + +} diff --git a/src/libexpr/include/nix/expr/eval-settings.hh b/src/libexpr/include/nix/expr/eval-settings.hh index 9b7573b20..7fa3f96be 100644 --- a/src/libexpr/include/nix/expr/eval-settings.hh +++ b/src/libexpr/include/nix/expr/eval-settings.hh @@ -1,6 +1,7 @@ #pragma once ///@file +#include "nix/expr/eval-profiler-settings.hh" #include "nix/util/configuration.hh" #include "nix/util/source-path.hh" @@ -12,7 +13,7 @@ struct PrimOp; struct EvalSettings : Config { /** - * Function used to interpet look path entries of a given scheme. + * Function used to interpret look path entries of a given scheme. * * The argument is the non-scheme part of the lookup path entry (see * `LookupPathHooks` below). @@ -203,6 +204,29 @@ struct EvalSettings : Config `flamegraph.pl`. )"}; + Setting evalProfilerMode{this, EvalProfilerMode::disabled, "eval-profiler", + R"( + Enables evaluation profiling. The following modes are supported: + + * `flamegraph` stack sampling profiler. Outputs folded format, one line per stack (suitable for `flamegraph.pl` and compatible tools). + + Use [`eval-profile-file`](#conf-eval-profile-file) to specify where the profile is saved. + + See [Using the `eval-profiler`](@docroot@/advanced-topics/eval-profiler.md). 
+ )"}; + + Setting evalProfileFile{this, "nix.profile", "eval-profile-file", + R"( + Specifies the file where [evaluation profile](#conf-eval-profiler) is saved. + )"}; + + Setting evalProfilerFrequency{this, 99, "eval-profiler-frequency", + R"( + Specifies the sampling rate in hertz for sampling evaluation profilers. + Use `0` to sample the stack after each function call. + See [`eval-profiler`](#conf-eval-profiler). + )"}; + Setting useEvalCache{this, true, "eval-cache", R"( Whether to use the flake evaluation cache. @@ -212,7 +236,7 @@ struct EvalSettings : Config Setting ignoreExceptionsDuringTry{this, false, "ignore-try", R"( - If set to true, ignore exceptions inside 'tryEval' calls when evaluating nix expressions in + If set to true, ignore exceptions inside 'tryEval' calls when evaluating Nix expressions in debug mode (using the --debugger flag). By default, the debugger pauses on all exceptions. )"}; diff --git a/src/libexpr/include/nix/expr/eval.hh b/src/libexpr/include/nix/expr/eval.hh index 58f88a5a3..763ce184c 100644 --- a/src/libexpr/include/nix/expr/eval.hh +++ b/src/libexpr/include/nix/expr/eval.hh @@ -3,6 +3,7 @@ #include "nix/expr/attr-set.hh" #include "nix/expr/eval-error.hh" +#include "nix/expr/eval-profiler.hh" #include "nix/util/types.hh" #include "nix/expr/value.hh" #include "nix/expr/nixexpr.hh" @@ -214,7 +215,7 @@ public: const Symbol sWith, sOutPath, sDrvPath, sType, sMeta, sName, sValue, sSystem, sOverrides, sOutputs, sOutputName, sIgnoreNulls, sFile, sLine, sColumn, sFunctor, sToString, - sRight, sWrong, sStructuredAttrs, + sRight, sWrong, sStructuredAttrs, sJson, sAllowedReferences, sAllowedRequisites, sDisallowedReferences, sDisallowedRequisites, sMaxSize, sMaxClosureSize, sBuilder, sArgs, @@ -552,6 +553,11 @@ public: std::string_view forceString(Value & v, NixStringContext & context, const PosIdx pos, std::string_view errorCtx, const ExperimentalFeatureSettings & xpSettings = experimentalFeatureSettings); std::string_view forceStringNoCtx(Value & v, const PosIdx pos, std::string_view errorCtx); + /** + * Get attribute from an attribute set and throw an error if it doesn't exist. + */ + Bindings::const_iterator getAttr(Symbol attrSym, const Bindings * attrSet, std::string_view errorCtx); + template [[gnu::noinline]] void addErrorTrace(Error & e, const Args & ... formatArgs) const; @@ -766,7 +772,7 @@ public: */ void assertEqValues(Value & v1, Value & v2, const PosIdx pos, std::string_view errorCtx); - bool isFunctor(Value & fun); + bool isFunctor(const Value & fun) const; void callFunction(Value & fun, std::span args, Value & vRes, const PosIdx pos); @@ -939,6 +945,9 @@ private: typedef std::map FunctionCalls; FunctionCalls functionCalls; + /** Evaluation/call profiler. 
*/ + MultiEvalProfiler profiler; + void incrFunctionCall(ExprLambda * fun); typedef std::map AttrSelects; diff --git a/src/libexpr/include/nix/expr/function-trace.hh b/src/libexpr/include/nix/expr/function-trace.hh index dc92d4b5c..ed1fc6452 100644 --- a/src/libexpr/include/nix/expr/function-trace.hh +++ b/src/libexpr/include/nix/expr/function-trace.hh @@ -2,15 +2,24 @@ ///@file #include "nix/expr/eval.hh" - -#include +#include "nix/expr/eval-profiler.hh" namespace nix { -struct FunctionCallTrace +class FunctionCallTrace : public EvalProfiler { - const Pos pos; - FunctionCallTrace(const Pos & pos); - ~FunctionCallTrace(); + Hooks getNeededHooksImpl() const override + { + return Hooks().set(preFunctionCall).set(postFunctionCall); + } + +public: + FunctionCallTrace() = default; + + [[gnu::noinline]] void + preFunctionCallHook(EvalState & state, const Value & v, std::span args, const PosIdx pos) override; + [[gnu::noinline]] void + postFunctionCallHook(EvalState & state, const Value & v, std::span args, const PosIdx pos) override; }; + } diff --git a/src/libexpr/include/nix/expr/meson.build b/src/libexpr/include/nix/expr/meson.build index 50ea8f3c2..333490ee4 100644 --- a/src/libexpr/include/nix/expr/meson.build +++ b/src/libexpr/include/nix/expr/meson.build @@ -14,6 +14,8 @@ headers = [config_pub_h] + files( 'eval-error.hh', 'eval-gc.hh', 'eval-inline.hh', + 'eval-profiler-settings.hh', + 'eval-profiler.hh', 'eval-settings.hh', 'eval.hh', 'function-trace.hh', diff --git a/src/libexpr/include/nix/expr/nixexpr.hh b/src/libexpr/include/nix/expr/nixexpr.hh index 090681470..6ede91948 100644 --- a/src/libexpr/include/nix/expr/nixexpr.hh +++ b/src/libexpr/include/nix/expr/nixexpr.hh @@ -138,9 +138,9 @@ struct ExprPath : Expr ref accessor; std::string s; Value v; - ExprPath(ref accessor, std::string s, PosIdx pos) : accessor(accessor), s(std::move(s)) + ExprPath(ref accessor, std::string s) : accessor(accessor), s(std::move(s)) { - v.mkPath(&*accessor, this->s.c_str(), pos.get()); + v.mkPath(&*accessor, this->s.c_str()); } Value * maybeThunk(EvalState & state, Env & env) override; COMMON_METHODS @@ -306,6 +306,9 @@ struct Formal struct Formals { typedef std::vector Formals_; + /** + * @pre Sorted according to predicate (std::tie(a.name, a.pos) < std::tie(b.name, b.pos)). + */ Formals_ formals; bool ellipsis; diff --git a/src/libexpr/include/nix/expr/print-ambiguous.hh b/src/libexpr/include/nix/expr/print-ambiguous.hh index 1dafd5d56..d4ecea0bf 100644 --- a/src/libexpr/include/nix/expr/print-ambiguous.hh +++ b/src/libexpr/include/nix/expr/print-ambiguous.hh @@ -1,6 +1,7 @@ #pragma once #include "nix/expr/value.hh" +#include "nix/expr/symbol-table.hh" namespace nix { diff --git a/src/libexpr/include/nix/expr/symbol-table.hh b/src/libexpr/include/nix/expr/symbol-table.hh index c04cc041b..20a05a09d 100644 --- a/src/libexpr/include/nix/expr/symbol-table.hh +++ b/src/libexpr/include/nix/expr/symbol-table.hh @@ -1,51 +1,35 @@ #pragma once ///@file -#include -#include -#include - -#include "nix/util/types.hh" +#include +#include "nix/expr/value.hh" #include "nix/util/chunked-vector.hh" #include "nix/util/error.hh" +#include +#define USE_FLAT_SYMBOL_SET (BOOST_VERSION >= 108100) +#if USE_FLAT_SYMBOL_SET +# include +#else +# include +#endif + namespace nix { -/** - * This class mainly exists to give us an operator<< for ostreams. We could also - * return plain strings from SymbolTable, but then we'd have to wrap every - * instance of a symbol that is fmt()ed, which is inconvenient and error-prone. 
- */ -class SymbolStr +class SymbolValue : protected Value { + friend class SymbolStr; friend class SymbolTable; -private: - const std::string * s; + uint32_t size_; + uint32_t idx; - explicit SymbolStr(const std::string & symbol): s(&symbol) {} + SymbolValue() = default; public: - bool operator == (std::string_view s2) const + operator std::string_view() const noexcept { - return *s == s2; - } - - const char * c_str() const - { - return s->c_str(); - } - - operator const std::string_view () const - { - return *s; - } - - friend std::ostream & operator <<(std::ostream & os, const SymbolStr & symbol); - - bool empty() const - { - return s->empty(); + return {c_str(), size_}; } }; @@ -56,24 +40,161 @@ public: */ class Symbol { + friend class SymbolStr; friend class SymbolTable; private: uint32_t id; - explicit Symbol(uint32_t id): id(id) {} + explicit Symbol(uint32_t id) noexcept : id(id) {} public: - Symbol() : id(0) {} + Symbol() noexcept : id(0) {} - explicit operator bool() const { return id > 0; } + [[gnu::always_inline]] + explicit operator bool() const noexcept { return id > 0; } - auto operator<=>(const Symbol other) const { return id <=> other.id; } - bool operator==(const Symbol other) const { return id == other.id; } + auto operator<=>(const Symbol other) const noexcept { return id <=> other.id; } + bool operator==(const Symbol other) const noexcept { return id == other.id; } friend class std::hash; }; +/** + * This class mainly exists to give us an operator<< for ostreams. We could also + * return plain strings from SymbolTable, but then we'd have to wrap every + * instance of a symbol that is fmt()ed, which is inconvenient and error-prone. + */ +class SymbolStr +{ + friend class SymbolTable; + + constexpr static size_t chunkSize{8192}; + using SymbolValueStore = ChunkedVector; + + const SymbolValue * s; + + struct Key + { + using HashType = boost::hash; + + SymbolValueStore & store; + std::string_view s; + std::size_t hash; + std::pmr::polymorphic_allocator & alloc; + + Key(SymbolValueStore & store, std::string_view s, std::pmr::polymorphic_allocator & stringAlloc) + : store(store) + , s(s) + , hash(HashType{}(s)) + , alloc(stringAlloc) {} + }; + +public: + SymbolStr(const SymbolValue & s) noexcept : s(&s) {} + + SymbolStr(const Key & key) + { + auto size = key.s.size(); + if (size >= std::numeric_limits::max()) { + throw Error("Size of symbol exceeds 4GiB and cannot be stored"); + } + // for multi-threaded implementations: lock store and allocator here + const auto & [v, idx] = key.store.add(SymbolValue{}); + if (size == 0) { + v.mkString("", nullptr); + } else { + auto s = key.alloc.allocate(size + 1); + memcpy(s, key.s.data(), size); + s[size] = '\0'; + v.mkString(s, nullptr); + } + v.size_ = size; + v.idx = idx; + this->s = &v; + } + + bool operator == (std::string_view s2) const noexcept + { + return *s == s2; + } + + [[gnu::always_inline]] + const char * c_str() const noexcept + { + return s->c_str(); + } + + [[gnu::always_inline]] + operator std::string_view () const noexcept + { + return *s; + } + + friend std::ostream & operator <<(std::ostream & os, const SymbolStr & symbol); + + [[gnu::always_inline]] + bool empty() const noexcept + { + return s->size_ == 0; + } + + [[gnu::always_inline]] + size_t size() const noexcept + { + return s->size_; + } + + [[gnu::always_inline]] + const Value * valuePtr() const noexcept + { + return s; + } + + explicit operator Symbol() const noexcept + { + return Symbol{s->idx + 1}; + } + + struct Hash + { + using is_transparent = void; + 
using is_avalanching = std::true_type; + + std::size_t operator()(SymbolStr str) const + { + return Key::HashType{}(*str.s); + } + + std::size_t operator()(const Key & key) const noexcept + { + return key.hash; + } + }; + + struct Equal + { + using is_transparent = void; + + bool operator()(SymbolStr a, SymbolStr b) const noexcept + { + // strings are unique, so that a pointer comparison is OK + return a.s == b.s; + } + + bool operator()(SymbolStr a, const Key & b) const noexcept + { + return a == b.s; + } + + [[gnu::always_inline]] + bool operator()(const Key & a, SymbolStr b) const noexcept + { + return operator()(b, a); + } + }; +}; + /** * Symbol table used by the parser and evaluator to represent and look * up identifiers and attributes efficiently. @@ -82,29 +203,46 @@ class SymbolTable { private: /** - * Map from string view (backed by ChunkedVector) -> offset into the store. + * SymbolTable is an append only data structure. + * During its lifetime the monotonic buffer holds all strings and nodes, if the symbol set is node based. + */ + std::pmr::monotonic_buffer_resource buffer; + std::pmr::polymorphic_allocator stringAlloc{&buffer}; + SymbolStr::SymbolValueStore store{16}; + + /** + * Transparent lookup of string view for a pointer to a ChunkedVector entry -> return offset into the store. * ChunkedVector references are never invalidated. */ - std::unordered_map symbols; - ChunkedVector store{16}; +#if USE_FLAT_SYMBOL_SET + boost::unordered_flat_set symbols{SymbolStr::chunkSize}; +#else + using SymbolValueAlloc = std::pmr::polymorphic_allocator; + boost::unordered_set symbols{SymbolStr::chunkSize, {&buffer}}; +#endif public: /** * Converts a string into a symbol. */ - Symbol create(std::string_view s) - { + Symbol create(std::string_view s) { // Most symbols are looked up more than once, so we trade off insertion performance // for lookup performance. // FIXME: make this thread-safe. 
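The `create` implementation that follows probes the symbol set with a lightweight `Key` (a string view plus a precomputed hash) and only materializes an interned `SymbolValue` on a miss. The sketch below shows the same interning idea using only standard C++20 containers with heterogeneous ("transparent") lookup; `MiniInterner` and its members are illustrative names rather than Nix APIs, and the real `SymbolTable` goes further by keying a flat set on the interned value itself and allocating string data from a monotonic buffer.

```cpp
#include <cstdint>
#include <functional>
#include <string>
#include <string_view>
#include <unordered_map>
#include <vector>

// Minimal interning sketch (assumed names, not part of Nix): the map is
// probed with a std::string_view, so no temporary std::string is allocated
// for symbols that already exist.
class MiniInterner
{
    struct Hash
    {
        using is_transparent = void;
        std::size_t operator()(std::string_view s) const noexcept
        {
            return std::hash<std::string_view>{}(s);
        }
    };

    std::unordered_map<std::string, std::uint32_t, Hash, std::equal_to<>> ids;
    std::vector<const std::string *> byId; // id -> interned string; id 0 is reserved

public:
    std::uint32_t create(std::string_view s)
    {
        if (auto it = ids.find(s); it != ids.end()) // heterogeneous lookup, no allocation
            return it->second;
        auto it = ids.emplace(std::string(s), static_cast<std::uint32_t>(byId.size() + 1)).first;
        byId.push_back(&it->first); // element references stay valid across rehashing
        return it->second;
    }

    std::string_view operator[](std::uint32_t id) const
    {
        return *byId.at(id - 1);
    }
};
```

As in the real table, identifiers start at 1 so that a default-constructed id can stand for "no symbol".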
- auto it = symbols.find(s); - if (it != symbols.end()) - return Symbol(it->second + 1); + return [&](T && key) -> Symbol { + if constexpr (requires { symbols.insert(key); }) { + auto [it, _] = symbols.insert(key); + return Symbol(*it); + } else { + auto it = symbols.find(key); + if (it != symbols.end()) + return Symbol(*it); - const auto & [rawSym, idx] = store.add(s); - symbols.emplace(rawSym, idx); - return Symbol(idx + 1); + it = symbols.emplace(key).first; + return Symbol(*it); + } + }(SymbolStr::Key{store, s, stringAlloc}); } std::vector resolve(const std::vector & symbols) const @@ -118,12 +256,14 @@ public: SymbolStr operator[](Symbol s) const { - if (s.id == 0 || s.id > store.size()) + uint32_t idx = s.id - uint32_t(1); + if (idx >= store.size()) unreachable(); - return SymbolStr(store[s.id - 1]); + return store[idx]; } - size_t size() const + [[gnu::always_inline]] + size_t size() const noexcept { return store.size(); } @@ -147,3 +287,5 @@ struct std::hash return std::hash{}(s.id); } }; + +#undef USE_FLAT_SYMBOL_SET diff --git a/src/libexpr/include/nix/expr/value.hh b/src/libexpr/include/nix/expr/value.hh index 6fe9b6b6f..098effa29 100644 --- a/src/libexpr/include/nix/expr/value.hh +++ b/src/libexpr/include/nix/expr/value.hh @@ -3,9 +3,10 @@ #include #include +#include +#include #include "nix/expr/eval-gc.hh" -#include "nix/expr/symbol-table.hh" #include "nix/expr/value/context.hh" #include "nix/util/source-path.hh" #include "nix/expr/print-options.hh" @@ -18,25 +19,35 @@ namespace nix { struct Value; class BindingsBuilder; - +/** + * Internal type discriminator, which is more detailed than `ValueType`, as + * it specifies the exact representation used (for types that have multiple + * possible representations). + * + * @warning The ordering is very significant. See ValueStorage::getInternalType() for details + * about how this is mapped into the alignment bits to save significant memory. + * This also restricts the number of internal types represented with distinct memory layouts. + */ typedef enum { tUninitialized = 0, + /* layout: Single/zero field payload */ tInt = 1, tBool, + tNull, + tFloat, + tExternal, + tPrimOp, + tAttrs, + /* layout: Pair of pointers payload */ + tListSmall, + tPrimOpApp, + tApp, + tThunk, + tLambda, + /* layout: Single untaggable field */ + tListN, tString, tPath, - tNull, - tAttrs, - tList1, - tList2, - tListN, - tThunk, - tApp, - tLambda, - tPrimOp, - tPrimOpApp, - tExternal, - tFloat } InternalType; /** @@ -55,7 +66,7 @@ typedef enum { nAttrs, nList, nFunction, - nExternal + nExternal, } ValueType; class Bindings; @@ -65,6 +76,7 @@ struct ExprLambda; struct ExprBlackHole; struct PrimOp; class Symbol; +class SymbolStr; class PosIdx; struct Pos; class StorePath; @@ -81,15 +93,15 @@ using NixFloat = double; */ class ExternalValueBase { - friend std::ostream & operator << (std::ostream & str, const ExternalValueBase & v); + friend std::ostream & operator<<(std::ostream & str, const ExternalValueBase & v); friend class Printer; - protected: +protected: /** * Print out the value */ virtual std::ostream & print(std::ostream & str) const = 0; - public: +public: /** * Return a simple string describing the type */ @@ -104,41 +116,44 @@ class ExternalValueBase * Coerce the value to a string. Defaults to uncoercable, i.e. throws an * error. 
*/ - virtual std::string coerceToString(EvalState & state, const PosIdx & pos, NixStringContext & context, bool copyMore, bool copyToStore) const; + virtual std::string coerceToString( + EvalState & state, const PosIdx & pos, NixStringContext & context, bool copyMore, bool copyToStore) const; /** * Compare to another value of the same type. Defaults to uncomparable, * i.e. always false. */ - virtual bool operator ==(const ExternalValueBase & b) const noexcept; + virtual bool operator==(const ExternalValueBase & b) const noexcept; /** * Print the value as JSON. Defaults to unconvertable, i.e. throws an error */ - virtual nlohmann::json printValueAsJSON(EvalState & state, bool strict, - NixStringContext & context, bool copyToStore = true) const; + virtual nlohmann::json + printValueAsJSON(EvalState & state, bool strict, NixStringContext & context, bool copyToStore = true) const; /** * Print the value as XML. Defaults to unevaluated */ - virtual void printValueAsXML(EvalState & state, bool strict, bool location, - XMLWriter & doc, NixStringContext & context, PathSet & drvsSeen, + virtual void printValueAsXML( + EvalState & state, + bool strict, + bool location, + XMLWriter & doc, + NixStringContext & context, + PathSet & drvsSeen, const PosIdx pos) const; - virtual ~ExternalValueBase() - { - }; + virtual ~ExternalValueBase() {}; }; -std::ostream & operator << (std::ostream & str, const ExternalValueBase & v); - +std::ostream & operator<<(std::ostream & str, const ExternalValueBase & v); class ListBuilder { const size_t size; Value * inlineElems[2] = {nullptr, nullptr}; public: - Value * * elems; + Value ** elems; ListBuilder(EvalState & state, size_t size); // NOTE: Can be noexcept because we are just copying integral values and @@ -147,48 +162,37 @@ public: : size(x.size) , inlineElems{x.inlineElems[0], x.inlineElems[1]} , elems(size <= 2 ? inlineElems : x.elems) - { } + { + } - Value * & operator [](size_t n) + Value *& operator[](size_t n) { return elems[n]; } - typedef Value * * iterator; + typedef Value ** iterator; - iterator begin() { return &elems[0]; } - iterator end() { return &elems[size]; } + iterator begin() + { + return &elems[0]; + } + iterator end() + { + return &elems[size]; + } friend struct Value; }; +namespace detail { -struct Value +/** + * Implementation mixin class for defining the public types + * In can be inherited from by the actual ValueStorage implementations + * for free due to Empty Base Class Optimization (EBCO). + */ +struct ValueBase { -private: - InternalType internalType = tUninitialized; - uint32_t pos{0}; - - friend std::string showType(const Value & v); - -public: - - void print(EvalState &state, std::ostream &str, PrintOptions options = PrintOptions {}); - - // Functions needed to distinguish the type - // These should be removed eventually, by putting the functionality that's - // needed by callers into methods of this type - - // type() == nThunk - inline bool isThunk() const { return internalType == tThunk; }; - inline bool isApp() const { return internalType == tApp; }; - inline bool isBlackhole() const; - - // type() == nFunction - inline bool isLambda() const { return internalType == tLambda; }; - inline bool isPrimOp() const { return internalType == tPrimOp; }; - inline bool isPrimOpApp() const { return internalType == tPrimOpApp; }; - /** * Strings in the evaluator carry a so-called `context` which * is a list of strings representing store paths. 
This is to @@ -211,56 +215,676 @@ public: * For canonicity, the store paths should be in sorted order. */ - struct StringWithContext { + struct StringWithContext + { const char * c_str; - const char * * context; // must be in sorted order + const char ** context; // must be in sorted order }; - struct Path { + struct Path + { SourceAccessor * accessor; const char * path; }; - struct ClosureThunk { + struct Null + {}; + + struct ClosureThunk + { Env * env; Expr * expr; }; - struct FunctionApplicationThunk { - Value * left, * right; + struct FunctionApplicationThunk + { + Value *left, *right; }; - struct Lambda { + /** + * Like FunctionApplicationThunk, but must be a distinct type in order to + * resolve overloads to `tPrimOpApp` instead of `tApp`. + * This type helps with the efficient implementation of arity>=2 primop calls. + */ + struct PrimOpApplicationThunk + { + Value *left, *right; + }; + + struct Lambda + { Env * env; ExprLambda * fun; }; - using Payload = union + using SmallList = std::array; + + struct List { - NixInt integer; - bool boolean; + size_t size; + Value * const * elems; + }; +}; - StringWithContext string; +template +struct PayloadTypeToInternalType +{}; - Path path; +/** + * All stored types must be distinct (not type aliases) for the purposes of + * overload resolution in setStorage. This ensures there's a bijection from + * InternalType <-> C++ type. + */ +#define NIX_VALUE_STORAGE_FOR_EACH_FIELD(MACRO) \ + MACRO(NixInt, integer, tInt) \ + MACRO(bool, boolean, tBool) \ + MACRO(ValueBase::StringWithContext, string, tString) \ + MACRO(ValueBase::Path, path, tPath) \ + MACRO(ValueBase::Null, null_, tNull) \ + MACRO(Bindings *, attrs, tAttrs) \ + MACRO(ValueBase::List, bigList, tListN) \ + MACRO(ValueBase::SmallList, smallList, tListSmall) \ + MACRO(ValueBase::ClosureThunk, thunk, tThunk) \ + MACRO(ValueBase::FunctionApplicationThunk, app, tApp) \ + MACRO(ValueBase::Lambda, lambda, tLambda) \ + MACRO(PrimOp *, primOp, tPrimOp) \ + MACRO(ValueBase::PrimOpApplicationThunk, primOpApp, tPrimOpApp) \ + MACRO(ExternalValueBase *, external, tExternal) \ + MACRO(NixFloat, fpoint, tFloat) - Bindings * attrs; - struct { - size_t size; - Value * const * elems; - } bigList; - Value * smallList[2]; - ClosureThunk thunk; - FunctionApplicationThunk app; - Lambda lambda; - PrimOp * primOp; - FunctionApplicationThunk primOpApp; - ExternalValueBase * external; - NixFloat fpoint; +#define NIX_VALUE_PAYLOAD_TYPE(T, FIELD_NAME, DISCRIMINATOR) \ + template<> \ + struct PayloadTypeToInternalType \ + { \ + static constexpr InternalType value = DISCRIMINATOR; \ }; +NIX_VALUE_STORAGE_FOR_EACH_FIELD(NIX_VALUE_PAYLOAD_TYPE) + +#undef NIX_VALUE_PAYLOAD_TYPE + +template +inline constexpr InternalType payloadTypeToInternalType = PayloadTypeToInternalType::value; + +} + +/** + * Discriminated union of types stored in the value. + * The union discriminator is @ref InternalType enumeration. + * + * This class can be specialized with a non-type template parameter + * of pointer size for more optimized data layouts on when pointer alignment + * bits can be used for storing the discriminator. + * + * All specializations of this type need to implement getStorage, setStorage and + * getInternalType methods. 
+ */ +template +class ValueStorage : public detail::ValueBase +{ +protected: + using Payload = union + { +#define NIX_VALUE_STORAGE_DEFINE_FIELD(T, FIELD_NAME, DISCRIMINATOR) T FIELD_NAME; + NIX_VALUE_STORAGE_FOR_EACH_FIELD(NIX_VALUE_STORAGE_DEFINE_FIELD) +#undef NIX_VALUE_STORAGE_DEFINE_FIELD + }; + +private: + InternalType internalType = tUninitialized; Payload payload; +protected: +#define NIX_VALUE_STORAGE_GET_IMPL(K, FIELD_NAME, DISCRIMINATOR) \ + void getStorage(K & val) const noexcept \ + { \ + assert(internalType == DISCRIMINATOR); \ + val = payload.FIELD_NAME; \ + } + +#define NIX_VALUE_STORAGE_SET_IMPL(K, FIELD_NAME, DISCRIMINATOR) \ + void setStorage(K val) noexcept \ + { \ + payload.FIELD_NAME = val; \ + internalType = DISCRIMINATOR; \ + } + + NIX_VALUE_STORAGE_FOR_EACH_FIELD(NIX_VALUE_STORAGE_GET_IMPL) + NIX_VALUE_STORAGE_FOR_EACH_FIELD(NIX_VALUE_STORAGE_SET_IMPL) + +#undef NIX_VALUE_STORAGE_SET_IMPL +#undef NIX_VALUE_STORAGE_GET_IMPL +#undef NIX_VALUE_STORAGE_FOR_EACH_FIELD + + /** Get internal type currently occupying the storage. */ + InternalType getInternalType() const noexcept + { + return internalType; + } +}; + +namespace detail { + +/* Whether to use a specialization of ValueStorage that does bitpacking into + alignment niches. */ +template +inline constexpr bool useBitPackedValueStorage = (ptrSize == 8) && (__STDCPP_DEFAULT_NEW_ALIGNMENT__ >= 8); + +} // namespace detail + +/** + * Value storage that is optimized for 64 bit systems. + * Packs discriminator bits into the pointer alignment niches. + */ +template +class ValueStorage>> : public detail::ValueBase +{ + /* Needs a dependent type name in order for member functions (and + * potentially ill-formed bit casts) to be SFINAE'd out. + * + * Otherwise some member functions could possibly be instantiated for 32 bit + * systems and fail due to an unsatisfied constraint. + */ + template + struct PackedPointerTypeStruct + { + using type = std::uint64_t; + }; + + using PackedPointer = typename PackedPointerTypeStruct::type; + using Payload = std::array; + Payload payload = {}; + + static constexpr int discriminatorBits = 3; + static constexpr PackedPointer discriminatorMask = (PackedPointer(1) << discriminatorBits) - 1; + + /** + * The value is stored as a pair of 8-byte double words. All pointers are assumed + * to be 8-byte aligned. This gives us at most 6 bits of discriminator bits + * of free storage. In some cases when one double word can't be tagged the whole + * discriminator is stored in the first double word. + * + * The layout of discriminator bits is determined by the 3 bits of PrimaryDiscriminator, + * which are always stored in the lower 3 bits of the first dword of the payload. + * The memory layout has 3 types depending on the PrimaryDiscriminator value. + * + * PrimaryDiscriminator::pdSingleDWord - Only the second dword carries the data. + * That leaves the first 8 bytes free for storing the InternalType in the upper + * bits. + * + * PrimaryDiscriminator::pdListN - pdPath - Only has 3 available padding bits + * because: + * - tListN needs a size, whose lower bits we can't borrow. + * - tString and tPath have C-string fields, which don't necessarily need to + * be aligned. + * + * In this case we reserve their discriminators directly in the PrimaryDiscriminator + * bits stored in payload[0]. + * + * PrimaryDiscriminator::pdPairOfPointers - Payloads that consist of a pair of pointers. + * In this case the 3 lower bits of payload[1] can be tagged. 
+ * + * The primary discriminator with value 0 is reserved for uninitialized Values, + * which are useful for diagnostics in C bindings. + */ + enum PrimaryDiscriminator : int { + pdUninitialized = 0, + pdSingleDWord, //< layout: Single/zero field payload + /* The order of these enumations must be the same as in InternalType. */ + pdListN, //< layout: Single untaggable field. + pdString, + pdPath, + pdPairOfPointers, //< layout: Pair of pointers payload + }; + + template + requires std::is_pointer_v + static T untagPointer(PackedPointer val) noexcept + { + return std::bit_cast(val & ~discriminatorMask); + } + + PrimaryDiscriminator getPrimaryDiscriminator() const noexcept + { + return static_cast(payload[0] & discriminatorMask); + } + + static void assertAligned(PackedPointer val) noexcept + { + assert((val & discriminatorMask) == 0 && "Pointer is not 8 bytes aligned"); + } + + template + void setSingleDWordPayload(PackedPointer untaggedVal) noexcept + { + /* There's plenty of free upper bits in the first dword, which is + used only for the discriminator. */ + payload[0] = static_cast(pdSingleDWord) | (static_cast(type) << discriminatorBits); + payload[1] = untaggedVal; + } + + template + void setUntaggablePayload(T * firstPtrField, U untaggableField) noexcept + { + static_assert(discriminator >= pdListN && discriminator <= pdPath); + auto firstFieldPayload = std::bit_cast(firstPtrField); + assertAligned(firstFieldPayload); + payload[0] = static_cast(discriminator) | firstFieldPayload; + payload[1] = std::bit_cast(untaggableField); + } + + template + void setPairOfPointersPayload(T * firstPtrField, U * secondPtrField) noexcept + { + static_assert(type >= tListSmall && type <= tLambda); + { + auto firstFieldPayload = std::bit_cast(firstPtrField); + assertAligned(firstFieldPayload); + payload[0] = static_cast(pdPairOfPointers) | firstFieldPayload; + } + { + auto secondFieldPayload = std::bit_cast(secondPtrField); + assertAligned(secondFieldPayload); + payload[1] = (type - tListSmall) | secondFieldPayload; + } + } + + template + requires std::is_pointer_v && std::is_pointer_v + void getPairOfPointersPayload(T & firstPtrField, U & secondPtrField) const noexcept + { + firstPtrField = untagPointer(payload[0]); + secondPtrField = untagPointer(payload[1]); + } + +protected: + /** Get internal type currently occupying the storage. */ + InternalType getInternalType() const noexcept + { + switch (auto pd = getPrimaryDiscriminator()) { + case pdUninitialized: + /* Discriminator value of zero is used to distinguish uninitialized values. */ + return tUninitialized; + case pdSingleDWord: + /* Payloads that only use up a single double word store the InternalType + in the upper bits of the first double word. */ + return InternalType(payload[0] >> discriminatorBits); + /* The order must match that of the enumerations defined in InternalType. 
*/ + case pdListN: + case pdString: + case pdPath: + return static_cast(tListN + (pd - pdListN)); + case pdPairOfPointers: + return static_cast(tListSmall + (payload[1] & discriminatorMask)); + [[unlikely]] default: + unreachable(); + } + } + +#define NIX_VALUE_STORAGE_DEF_PAIR_OF_PTRS(TYPE, MEMBER_A, MEMBER_B) \ + \ + void getStorage(TYPE & val) const noexcept \ + { \ + getPairOfPointersPayload(val MEMBER_A, val MEMBER_B); \ + } \ + \ + void setStorage(TYPE val) noexcept \ + { \ + setPairOfPointersPayload>(val MEMBER_A, val MEMBER_B); \ + } + + NIX_VALUE_STORAGE_DEF_PAIR_OF_PTRS(SmallList, [0], [1]) + NIX_VALUE_STORAGE_DEF_PAIR_OF_PTRS(PrimOpApplicationThunk, .left, .right) + NIX_VALUE_STORAGE_DEF_PAIR_OF_PTRS(FunctionApplicationThunk, .left, .right) + NIX_VALUE_STORAGE_DEF_PAIR_OF_PTRS(ClosureThunk, .env, .expr) + NIX_VALUE_STORAGE_DEF_PAIR_OF_PTRS(Lambda, .env, .fun) + +#undef NIX_VALUE_STORAGE_DEF_PAIR_OF_PTRS + + void getStorage(NixInt & integer) const noexcept + { + /* PackedPointerType -> int64_t here is well-formed, since the standard requires + this conversion to follow 2's complement rules. This is just a no-op. */ + integer = NixInt(payload[1]); + } + + void getStorage(bool & boolean) const noexcept + { + boolean = payload[1]; + } + + void getStorage(Null & null) const noexcept {} + + void getStorage(NixFloat & fpoint) const noexcept + { + fpoint = std::bit_cast(payload[1]); + } + + void getStorage(ExternalValueBase *& external) const noexcept + { + external = std::bit_cast(payload[1]); + } + + void getStorage(PrimOp *& primOp) const noexcept + { + primOp = std::bit_cast(payload[1]); + } + + void getStorage(Bindings *& attrs) const noexcept + { + attrs = std::bit_cast(payload[1]); + } + + void getStorage(List & list) const noexcept + { + list.elems = untagPointer(payload[0]); + list.size = payload[1]; + } + + void getStorage(StringWithContext & string) const noexcept + { + string.context = untagPointer(payload[0]); + string.c_str = std::bit_cast(payload[1]); + } + + void getStorage(Path & path) const noexcept + { + path.accessor = untagPointer(payload[0]); + path.path = std::bit_cast(payload[1]); + } + + void setStorage(NixInt integer) noexcept + { + setSingleDWordPayload(integer.value); + } + + void setStorage(bool boolean) noexcept + { + setSingleDWordPayload(boolean); + } + + void setStorage(Null path) noexcept + { + setSingleDWordPayload(0); + } + + void setStorage(NixFloat fpoint) noexcept + { + setSingleDWordPayload(std::bit_cast(fpoint)); + } + + void setStorage(ExternalValueBase * external) noexcept + { + setSingleDWordPayload(std::bit_cast(external)); + } + + void setStorage(PrimOp * primOp) noexcept + { + setSingleDWordPayload(std::bit_cast(primOp)); + } + + void setStorage(Bindings * bindings) noexcept + { + setSingleDWordPayload(std::bit_cast(bindings)); + } + + void setStorage(List list) noexcept + { + setUntaggablePayload(list.elems, list.size); + } + + void setStorage(StringWithContext string) noexcept + { + setUntaggablePayload(string.context, string.c_str); + } + + void setStorage(Path path) noexcept + { + setUntaggablePayload(path.accessor, path.path); + } +}; + +/** + * View into a list of Value * that is itself immutable. + * + * Since not all representations of ValueStorage can provide + * a pointer to a const array of Value * this proxy class either + * stores the small list inline or points to the big list. 
+ */ +class ListView +{ + using SpanType = std::span; + using SmallList = detail::ValueBase::SmallList; + using List = detail::ValueBase::List; + + std::variant raw; + +public: + ListView(SmallList list) + : raw(list) + { + } + + ListView(List list) + : raw(list) + { + } + + Value * const * data() const & noexcept + { + return std::visit( + overloaded{ + [](const SmallList & list) { return list.data(); }, [](const List & list) { return list.elems; }}, + raw); + } + + std::size_t size() const noexcept + { + return std::visit( + overloaded{ + [](const SmallList & list) -> std::size_t { return list.back() == nullptr ? 1 : 2; }, + [](const List & list) -> std::size_t { return list.size; }}, + raw); + } + + Value * operator[](std::size_t i) const noexcept + { + return data()[i]; + } + + SpanType span() const & + { + return SpanType(data(), size()); + } + + /* Ensure that no dangling views can be created accidentally, as that + would lead to hard to diagnose bugs that only affect small lists. */ + SpanType span() && = delete; + Value * const * data() && noexcept = delete; + + /** + * Random-access iterator that only allows iterating over a constant range + * of mutable Value pointers. + * + * @note Not a pointer to minimize potential misuses and implicitly relying + * on the iterator being a pointer. + **/ + class iterator + { + public: + using value_type = Value *; + using pointer = const value_type *; + using reference = const value_type &; + using difference_type = std::ptrdiff_t; + using iterator_category = std::random_access_iterator_tag; + + private: + pointer ptr = nullptr; + + friend class ListView; + + iterator(pointer ptr) + : ptr(ptr) + { + } + + public: + iterator() = default; + + reference operator*() const + { + return *ptr; + } + + const value_type * operator->() const + { + return ptr; + } + + reference operator[](difference_type diff) const + { + return ptr[diff]; + } + + iterator & operator++() + { + ++ptr; + return *this; + } + + iterator operator++(int) + { + pointer tmp = ptr; + ++*this; + return iterator(tmp); + } + + iterator & operator--() + { + --ptr; + return *this; + } + + iterator operator--(int) + { + pointer tmp = ptr; + --*this; + return iterator(tmp); + } + + iterator & operator+=(difference_type diff) + { + ptr += diff; + return *this; + } + + iterator operator+(difference_type diff) const + { + return iterator(ptr + diff); + } + + friend iterator operator+(difference_type diff, const iterator & rhs) + { + return iterator(diff + rhs.ptr); + } + + iterator & operator-=(difference_type diff) + { + ptr -= diff; + return *this; + } + + iterator operator-(difference_type diff) const + { + return iterator(ptr - diff); + } + + difference_type operator-(const iterator & rhs) const + { + return ptr - rhs.ptr; + } + + std::strong_ordering operator<=>(const iterator & rhs) const = default; + }; + + using const_iterator = iterator; + + iterator begin() const & + { + return data(); + } + + iterator end() const & + { + return data() + size(); + } + + /* Ensure that no dangling iterators can be created accidentally, as that + would lead to hard to diagnose bugs that only affect small lists. 
*/ + iterator begin() && = delete; + iterator end() && = delete; +}; + +static_assert(std::random_access_iterator); + +struct Value : public ValueStorage +{ + friend std::string showType(const Value & v); + + template + bool isa() const noexcept + { + return ((getInternalType() == discriminator) || ...); + } + + template + T getStorage() const noexcept + { + if (getInternalType() != detail::payloadTypeToInternalType) [[unlikely]] + unreachable(); + T out; + ValueStorage::getStorage(out); + return out; + } + +public: + + /** + * Never modify the backing `Value` object! + */ + static Value * toPtr(SymbolStr str) noexcept; + + void print(EvalState & state, std::ostream & str, PrintOptions options = PrintOptions{}); + + // Functions needed to distinguish the type + // These should be removed eventually, by putting the functionality that's + // needed by callers into methods of this type + + // type() == nThunk + inline bool isThunk() const + { + return isa(); + }; + inline bool isApp() const + { + return isa(); + }; + inline bool isBlackhole() const; + + // type() == nFunction + inline bool isLambda() const + { + return isa(); + }; + inline bool isPrimOp() const + { + return isa(); + }; + inline bool isPrimOpApp() const + { + return isa(); + }; + /** * Returns the normal type of a Value. This only returns nThunk if * the Value hasn't been forceValue'd @@ -270,19 +894,35 @@ public: */ inline ValueType type(bool invalidIsThunk = false) const { - switch (internalType) { - case tUninitialized: break; - case tInt: return nInt; - case tBool: return nBool; - case tString: return nString; - case tPath: return nPath; - case tNull: return nNull; - case tAttrs: return nAttrs; - case tList1: case tList2: case tListN: return nList; - case tLambda: case tPrimOp: case tPrimOpApp: return nFunction; - case tExternal: return nExternal; - case tFloat: return nFloat; - case tThunk: case tApp: return nThunk; + switch (getInternalType()) { + case tUninitialized: + break; + case tInt: + return nInt; + case tBool: + return nBool; + case tString: + return nString; + case tPath: + return nPath; + case tNull: + return nNull; + case tAttrs: + return nAttrs; + case tListSmall: + case tListN: + return nList; + case tLambda: + case tPrimOp: + case tPrimOpApp: + return nFunction; + case tExternal: + return nExternal; + case tFloat: + return nFloat; + case tThunk: + case tApp: + return nThunk; } if (invalidIsThunk) return nThunk; @@ -290,41 +930,34 @@ public: unreachable(); } - inline void finishValue(InternalType newType, Payload newPayload, uint32_t newPos = 0) - { - payload = newPayload; - internalType = newType; - pos = newPos; - } - /** * A value becomes valid when it is initialized. We don't use this * in the evaluator; only in the bindings, where the slight extra * cost is warranted because of inexperienced callers. 
*/ - inline bool isValid() const + inline bool isValid() const noexcept { - return internalType != tUninitialized; + return !isa(); } - inline void mkInt(NixInt::Inner n) + inline void mkInt(NixInt::Inner n) noexcept { mkInt(NixInt{n}); } - inline void mkInt(NixInt n) + inline void mkInt(NixInt n) noexcept { - finishValue(tInt, { .integer = n }); + setStorage(NixInt{n}); } - inline void mkBool(bool b) + inline void mkBool(bool b) noexcept { - finishValue(tBool, { .boolean = b }); + setStorage(b); } - inline void mkString(const char * s, const char * * context = 0) + inline void mkString(const char * s, const char ** context = 0) noexcept { - finishValue(tString, { .string = { .c_str = s, .context = context } }); + setStorage(StringWithContext{.c_str = s, .context = context}); } void mkString(std::string_view s); @@ -333,63 +966,58 @@ public: void mkStringMove(const char * s, const NixStringContext & context); - inline void mkString(const SymbolStr & s) - { - mkString(s.c_str()); - } - void mkPath(const SourcePath & path); void mkPath(std::string_view path); - inline void mkPath(SourceAccessor * accessor, const char * path, uint32_t pos) + inline void mkPath(SourceAccessor * accessor, const char * path) noexcept { - finishValue(tPath, { .path = { .accessor = accessor, .path = path } }, pos); + setStorage(Path{.accessor = accessor, .path = path}); } - inline void mkNull() + inline void mkNull() noexcept { - finishValue(tNull, {}); + setStorage(Null{}); } - inline void mkAttrs(Bindings * a) + inline void mkAttrs(Bindings * a) noexcept { - finishValue(tAttrs, { .attrs = a }); + setStorage(a); } Value & mkAttrs(BindingsBuilder & bindings); - void mkList(const ListBuilder & builder) + void mkList(const ListBuilder & builder) noexcept { if (builder.size == 1) - finishValue(tList1, { .smallList = { builder.inlineElems[0] } }); + setStorage(std::array{builder.inlineElems[0], nullptr}); else if (builder.size == 2) - finishValue(tList2, { .smallList = { builder.inlineElems[0], builder.inlineElems[1] } }); + setStorage(std::array{builder.inlineElems[0], builder.inlineElems[1]}); else - finishValue(tListN, { .bigList = { .size = builder.size, .elems = builder.elems } }); + setStorage(List{.size = builder.size, .elems = builder.elems}); } - inline void mkThunk(Env * e, Expr * ex) + inline void mkThunk(Env * e, Expr * ex) noexcept { - finishValue(tThunk, { .thunk = { .env = e, .expr = ex } }); + setStorage(ClosureThunk{.env = e, .expr = ex}); } - inline void mkApp(Value * l, Value * r) + inline void mkApp(Value * l, Value * r) noexcept { - finishValue(tApp, { .app = { .left = l, .right = r } }); + setStorage(FunctionApplicationThunk{.left = l, .right = r}); } - inline void mkLambda(Env * e, ExprLambda * f) + inline void mkLambda(Env * e, ExprLambda * f) noexcept { - finishValue(tLambda, { .lambda = { .env = e, .fun = f } }); + setStorage(Lambda{.env = e, .fun = f}); } inline void mkBlackhole(); void mkPrimOp(PrimOp * p); - inline void mkPrimOpApp(Value * l, Value * r) + inline void mkPrimOpApp(Value * l, Value * r) noexcept { - finishValue(tPrimOpApp, { .primOpApp = { .left = l, .right = r } }); + setStorage(PrimOpApplicationThunk{.left = l, .right = r}); } /** @@ -397,40 +1025,29 @@ public: */ const PrimOp * primOpAppPrimOp() const; - inline void mkExternal(ExternalValueBase * e) + inline void mkExternal(ExternalValueBase * e) noexcept { - finishValue(tExternal, { .external = e }); + setStorage(e); } - inline void mkFloat(NixFloat n) + inline void mkFloat(NixFloat n) noexcept { - finishValue(tFloat, { 
.fpoint = n }); + setStorage(n); } - bool isList() const + bool isList() const noexcept { - return internalType == tList1 || internalType == tList2 || internalType == tListN; + return isa(); } - Value * const * listElems() + ListView listView() const noexcept { - return internalType == tList1 || internalType == tList2 ? payload.smallList : payload.bigList.elems; + return isa() ? ListView(getStorage()) : ListView(getStorage()); } - std::span listItems() const + size_t listSize() const noexcept { - assert(isList()); - return std::span(listElems(), listSize()); - } - - Value * const * listElems() const - { - return internalType == tList1 || internalType == tList2 ? payload.smallList : payload.bigList.elems; - } - - size_t listSize() const - { - return internalType == tList1 ? 1 : internalType == tList2 ? 2 : payload.bigList.size; + return isa() ? (getStorage()[1] == nullptr ? 1 : 2) : getStorage().size; } PosIdx determinePos(const PosIdx pos) const; @@ -444,57 +1061,90 @@ public: SourcePath path() const { - assert(internalType == tPath); - return SourcePath( - ref(payload.path.accessor->shared_from_this()), - CanonPath(CanonPath::unchecked_t(), payload.path.path)); + return SourcePath(ref(pathAccessor()->shared_from_this()), CanonPath(CanonPath::unchecked_t(), pathStr())); } - std::string_view string_view() const + std::string_view string_view() const noexcept { - assert(internalType == tString); - return std::string_view(payload.string.c_str); + return std::string_view(getStorage().c_str); } - const char * c_str() const + const char * c_str() const noexcept { - assert(internalType == tString); - return payload.string.c_str; + return getStorage().c_str; } - const char * * context() const + const char ** context() const noexcept { - return payload.string.context; + return getStorage().context; } - ExternalValueBase * external() const - { return payload.external; } + ExternalValueBase * external() const noexcept + { + return getStorage(); + } - const Bindings * attrs() const - { return payload.attrs; } + const Bindings * attrs() const noexcept + { + return getStorage(); + } - const PrimOp * primOp() const - { return payload.primOp; } + const PrimOp * primOp() const noexcept + { + return getStorage(); + } - bool boolean() const - { return payload.boolean; } + bool boolean() const noexcept + { + return getStorage(); + } - NixInt integer() const - { return payload.integer; } + NixInt integer() const noexcept + { + return getStorage(); + } - NixFloat fpoint() const - { return payload.fpoint; } + NixFloat fpoint() const noexcept + { + return getStorage(); + } - inline uint32_t getPos() const - { return pos; } + Lambda lambda() const noexcept + { + return getStorage(); + } + + ClosureThunk thunk() const noexcept + { + return getStorage(); + } + + PrimOpApplicationThunk primOpApp() const noexcept + { + return getStorage(); + } + + FunctionApplicationThunk app() const noexcept + { + return getStorage(); + } + + const char * pathStr() const noexcept + { + return getStorage().path; + } + + SourceAccessor * pathAccessor() const noexcept + { + return getStorage().accessor; + } }; - extern ExprBlackHole eBlackHole; bool Value::isBlackhole() const { - return internalType == tThunk && payload.thunk.expr == (Expr*) &eBlackHole; + return isThunk() && thunk().expr == (Expr *) &eBlackHole; } void Value::mkBlackhole() @@ -502,11 +1152,16 @@ void Value::mkBlackhole() mkThunk(nullptr, (Expr *) &eBlackHole); } - typedef std::vector> ValueVector; -typedef std::unordered_map, std::equal_to, traceable_allocator>> 
ValueMap; -typedef std::map, traceable_allocator>> ValueVectorMap; - +typedef std::unordered_map< + Symbol, + Value *, + std::hash, + std::equal_to, + traceable_allocator>> + ValueMap; +typedef std::map, traceable_allocator>> + ValueVectorMap; /** * A value allocated in traceable memory. @@ -516,5 +1171,4 @@ typedef std::shared_ptr RootValue; RootValue allocRootValue(Value * v); void forceNoNullByte(std::string_view s, std::function = nullptr); - } diff --git a/src/libexpr/lexer-helpers.hh b/src/libexpr/lexer-helpers.hh index d40f7b874..225eb157a 100644 --- a/src/libexpr/lexer-helpers.hh +++ b/src/libexpr/lexer-helpers.hh @@ -2,7 +2,7 @@ #include -// inluding the generated headers twice leads to errors +// including the generated headers twice leads to errors #ifndef BISON_HEADER # include "lexer-tab.hh" # include "parser-tab.hh" diff --git a/src/libexpr/meson.build b/src/libexpr/meson.build index 2b465b85a..f5adafae0 100644 --- a/src/libexpr/meson.build +++ b/src/libexpr/meson.build @@ -140,6 +140,8 @@ sources = files( 'eval-cache.cc', 'eval-error.cc', 'eval-gc.cc', + 'eval-profiler-settings.cc', + 'eval-profiler.cc', 'eval-settings.cc', 'eval.cc', 'function-trace.cc', diff --git a/src/libexpr/nixexpr.cc b/src/libexpr/nixexpr.cc index 1a71096d4..92071b22d 100644 --- a/src/libexpr/nixexpr.cc +++ b/src/libexpr/nixexpr.cc @@ -606,7 +606,7 @@ void ExprLambda::setDocComment(DocComment docComment) { size_t SymbolTable::totalSize() const { size_t n = 0; - dump([&] (const std::string & s) { n += s.size(); }); + dump([&] (SymbolStr s) { n += s.size(); }); return n; } diff --git a/src/libexpr/parser.y b/src/libexpr/parser.y index e9be2837c..8878b86c2 100644 --- a/src/libexpr/parser.y +++ b/src/libexpr/parser.y @@ -374,8 +374,8 @@ path_start root filesystem accessor, rather than the accessor of the current Nix expression. */ literal.front() == '/' - ? new ExprPath(state->rootFS, std::move(path), CUR_POS) - : new ExprPath(state->basePath.accessor, std::move(path), CUR_POS); + ? new ExprPath(state->rootFS, std::move(path)) + : new ExprPath(state->basePath.accessor, std::move(path)); } | HPATH { if (state->settings.pureEval) { @@ -385,7 +385,7 @@ path_start ); } Path path(getHome() + std::string($1.p + 1, $1.l - 1)); - $$ = new ExprPath(ref(state->rootFS), std::move(path), CUR_POS); + $$ = new ExprPath(ref(state->rootFS), std::move(path)); } ; diff --git a/src/libexpr/paths.cc b/src/libexpr/paths.cc index 38ded067a..438de1d88 100644 --- a/src/libexpr/paths.cc +++ b/src/libexpr/paths.cc @@ -24,7 +24,11 @@ StorePath EvalState::devirtualize(const StorePath & path, StringMap * rewrites) { if (auto mount = storeFS->getMount(CanonPath(store->printStorePath(path)))) { auto storePath = fetchToStore( - *store, SourcePath{ref(mount)}, settings.readOnlyMode ? FetchMode::DryRun : FetchMode::Copy, path.name()); + fetchSettings, + *store, + SourcePath{ref(mount)}, + settings.readOnlyMode ? 
FetchMode::DryRun : FetchMode::Copy, + path.name()); assert(storePath.name() == path.name()); if (rewrites) rewrites->emplace(path.hashPart(), storePath.hashPart()); @@ -57,13 +61,12 @@ std::string EvalState::computeBaseName(const SourcePath & path, PosIdx pos) if (path.accessor == rootFS) { if (auto storePath = store->maybeParseStorePath(path.path.abs())) { warn( - "Copying '%s' to the store again\n" + "Copying '%s' to the store again.\n" "You can make Nix evaluate faster and copy fewer files by replacing `./.` with the `self` flake input, " - "or `builtins.path { path = ./.; name = \"source\"; }`\n\n" - "Location: %s\n", - path, - positions[pos]); - return std::string(fetchToStore(*store, path, FetchMode::DryRun, storePath->name()).to_string()); + "or `builtins.path { path = ./.; name = \"source\"; }`.\n", + path); + return std::string( + fetchToStore(fetchSettings, *store, path, FetchMode::DryRun, storePath->name()).to_string()); } } return std::string(path.baseName()); @@ -72,8 +75,9 @@ std::string EvalState::computeBaseName(const SourcePath & path, PosIdx pos) StorePath EvalState::mountInput( fetchers::Input & input, const fetchers::Input & originalInput, ref accessor, bool requireLockable) { - auto storePath = settings.lazyTrees ? StorePath::random(input.getName()) - : fetchToStore(*store, accessor, FetchMode::Copy, input.getName()); + auto storePath = settings.lazyTrees + ? StorePath::random(input.getName()) + : fetchToStore(fetchSettings, *store, accessor, FetchMode::Copy, input.getName()); allowPath(storePath); // FIXME: should just whitelist the entire virtual store @@ -84,7 +88,7 @@ StorePath EvalState::mountInput( if (store->isValidPath(storePath)) _narHash = store->queryPathInfo(storePath)->narHash; else - _narHash = fetchToStore2(*store, accessor, FetchMode::DryRun, input.getName()).second; + _narHash = fetchToStore2(fetchSettings, *store, accessor, FetchMode::DryRun, input.getName()).second; } return _narHash; }; diff --git a/src/libexpr/primops.cc b/src/libexpr/primops.cc index 825532413..f510a66ed 100644 --- a/src/libexpr/primops.cc +++ b/src/libexpr/primops.cc @@ -14,6 +14,7 @@ #include "nix/expr/value-to-xml.hh" #include "nix/expr/primops.hh" #include "nix/fetchers/fetch-to-store.hh" +#include "nix/util/sort.hh" #include "nix/util/mounted-source-accessor.hh" #include @@ -421,7 +422,7 @@ void prim_importNative(EvalState & state, const PosIdx pos, Value * * args, Valu void prim_exec(EvalState & state, const PosIdx pos, Value * * args, Value & v) { state.forceList(*args[0], pos, "while evaluating the first argument passed to builtins.exec"); - auto elems = args[0]->listElems(); + auto elems = args[0]->listView(); auto count = args[0]->listSize(); if (count == 0) state.error("at least one argument to 'exec' required").atPos(pos).debugThrow(); @@ -430,7 +431,7 @@ void prim_exec(EvalState & state, const PosIdx pos, Value * * args, Value & v) "while evaluating the first element of the argument passed to builtins.exec", false, false).toOwned(); Strings commandArgs; - for (unsigned int i = 1; i < args[0]->listSize(); ++i) { + for (size_t i = 1; i < count; ++i) { commandArgs.push_back( state.coerceToString(pos, *elems[i], context, "while evaluating an element of the argument passed to builtins.exec", @@ -658,7 +659,7 @@ struct CompareValues // Note: we don't take the accessor into account // since it's not obvious how to compare them in a // reproducible way. 
- return strcmp(v1->payload.path.path, v2->payload.path.path) < 0; + return strcmp(v1->pathStr(), v2->pathStr()) < 0; case nList: // Lexicographic comparison for (size_t i = 0;; i++) { @@ -666,8 +667,8 @@ struct CompareValues return false; } else if (i == v1->listSize()) { return true; - } else if (!state.eqValues(*v1->listElems()[i], *v2->listElems()[i], pos, errorCtx)) { - return (*this)(v1->listElems()[i], v2->listElems()[i], "while comparing two list elements"); + } else if (!state.eqValues(*v1->listView()[i], *v2->listView()[i], pos, errorCtx)) { + return (*this)(v1->listView()[i], v2->listView()[i], "while comparing two list elements"); } } default: @@ -685,31 +686,17 @@ struct CompareValues typedef std::list> ValueList; - -static Bindings::const_iterator getAttr( - EvalState & state, - Symbol attrSym, - const Bindings * attrSet, - std::string_view errorCtx) -{ - auto value = attrSet->find(attrSym); - if (value == attrSet->end()) { - state.error("attribute '%s' missing", state.symbols[attrSym]).withTrace(noPos, errorCtx).debugThrow(); - } - return value; -} - static void prim_genericClosure(EvalState & state, const PosIdx pos, Value * * args, Value & v) { state.forceAttrs(*args[0], noPos, "while evaluating the first argument passed to builtins.genericClosure"); /* Get the start set. */ - auto startSet = getAttr(state, state.sStartSet, args[0]->attrs(), "in the attrset passed as argument to builtins.genericClosure"); + auto startSet = state.getAttr(state.sStartSet, args[0]->attrs(), "in the attrset passed as argument to builtins.genericClosure"); state.forceList(*startSet->value, noPos, "while evaluating the 'startSet' attribute passed as argument to builtins.genericClosure"); ValueList workSet; - for (auto elem : startSet->value->listItems()) + for (auto elem : startSet->value->listView()) workSet.push_back(elem); if (startSet->value->listSize() == 0) { @@ -718,7 +705,7 @@ static void prim_genericClosure(EvalState & state, const PosIdx pos, Value * * a } /* Get the operator. */ - auto op = getAttr(state, state.sOperator, args[0]->attrs(), "in the attrset passed as argument to builtins.genericClosure"); + auto op = state.getAttr(state.sOperator, args[0]->attrs(), "in the attrset passed as argument to builtins.genericClosure"); state.forceFunction(*op->value, noPos, "while evaluating the 'operator' attribute passed as argument to builtins.genericClosure"); /* Construct the closure by applying the operator to elements of @@ -735,7 +722,7 @@ static void prim_genericClosure(EvalState & state, const PosIdx pos, Value * * a state.forceAttrs(*e, noPos, "while evaluating one of the elements generated by (or initially passed to) builtins.genericClosure"); - auto key = getAttr(state, state.sKey, e->attrs(), "in one of the attrsets generated by (or initially passed to) builtins.genericClosure"); + auto key = state.getAttr(state.sKey, e->attrs(), "in one of the attrsets generated by (or initially passed to) builtins.genericClosure"); state.forceValue(*key->value, noPos); if (!doneKeys.insert(key->value).second) continue; @@ -747,7 +734,7 @@ static void prim_genericClosure(EvalState & state, const PosIdx pos, Value * * a state.forceList(newElements, noPos, "while evaluating the return value of the `operator` passed to builtins.genericClosure"); /* Add the values returned by the operator to the work set. 
*/ - for (auto elem : newElements.listItems()) { + for (auto elem : newElements.listView()) { state.forceValue(*elem, noPos); // "while evaluating one one of the elements returned by the `operator` passed to builtins.genericClosure"); workSet.push_back(elem); } @@ -919,7 +906,7 @@ static void prim_ceil(EvalState & state, const PosIdx pos, Value * * args, Value auto arg = args[0]->integer(); auto res = v.integer(); if (arg != res) { - state.error("Due to a bug (see https://github.com/NixOS/nix/issues/12899) a loss of precision occured in previous Nix versions because the NixInt argument %1% was rounded to %2%.\n\tFuture Nix versions might implement the correct behavior.", arg, res).atPos(pos).debugThrow(); + state.error("Due to a bug (see https://github.com/NixOS/nix/issues/12899) a loss of precision occurred in previous Nix versions because the NixInt argument %1% was rounded to %2%.\n\tFuture Nix versions might implement the correct behavior.", arg, res).atPos(pos).debugThrow(); } } } @@ -960,7 +947,7 @@ static void prim_floor(EvalState & state, const PosIdx pos, Value * * args, Valu auto arg = args[0]->integer(); auto res = v.integer(); if (arg != res) { - state.error("Due to a bug (see https://github.com/NixOS/nix/issues/12899) a loss of precision occured in previous Nix versions because the NixInt argument %1% was rounded to %2%.\n\tFuture Nix versions might implement the correct behavior.", arg, res).atPos(pos).debugThrow(); + state.error("Due to a bug (see https://github.com/NixOS/nix/issues/12899) a loss of precision occurred in previous Nix versions because the NixInt argument %1% was rounded to %2%.\n\tFuture Nix versions might implement the correct behavior.", arg, res).atPos(pos).debugThrow(); } } } @@ -994,7 +981,7 @@ static void prim_tryEval(EvalState & state, const PosIdx pos, Value * * args, Va ReplExitStatus (* savedDebugRepl)(ref es, const ValMap & extraEnv) = nullptr; if (state.debugRepl && state.settings.ignoreExceptionsDuringTry) { - /* to prevent starting the repl from exceptions withing a tryEval, null it. */ + /* to prevent starting the repl from exceptions within a tryEval, null it. */ savedDebugRepl = state.debugRepl; state.debugRepl = nullptr; } @@ -1200,7 +1187,7 @@ static void prim_second(EvalState & state, const PosIdx pos, Value * * args, Val static void derivationStrictInternal( EvalState & state, - const std::string & name, + std::string_view name, const Bindings * attrs, Value & v); @@ -1218,9 +1205,9 @@ static void prim_derivationStrict(EvalState & state, const PosIdx pos, Value * * auto attrs = args[0]->attrs(); /* Figure out the name first (for stack backtraces). */ - auto nameAttr = getAttr(state, state.sName, attrs, "in the attrset passed as argument to builtins.derivationStrict"); + auto nameAttr = state.getAttr(state.sName, attrs, "in the attrset passed as argument to builtins.derivationStrict"); - std::string drvName; + std::string_view drvName; try { drvName = state.forceStringNoCtx(*nameAttr->value, pos, "while evaluating the `name` attribute passed to builtins.derivationStrict"); } catch (Error & e) { @@ -1279,7 +1266,7 @@ static void checkDerivationName(EvalState & state, std::string_view drvName) static void derivationStrictInternal( EvalState & state, - const std::string & drvName, + std::string_view drvName, const Bindings * attrs, Value & v) { @@ -1387,7 +1374,7 @@ static void derivationStrictInternal( command-line arguments to the builder. 
*/ else if (i->name == state.sArgs) { state.forceList(*i->value, pos, context_below); - for (auto elem : i->value->listItems()) { + for (auto elem : i->value->listView()) { auto s = state.coerceToString(pos, *elem, context, "while evaluating an element of the argument list", true).toOwned(); @@ -1419,7 +1406,7 @@ static void derivationStrictInternal( /* Require ‘outputs’ to be a list of strings. */ state.forceList(*i->value, pos, context_below); Strings ss; - for (auto elem : i->value->listItems()) + for (auto elem : i->value->listView()) ss.emplace_back(state.forceStringNoCtx(*elem, pos, context_below)); handleOutputs(ss); } @@ -1448,6 +1435,8 @@ static void derivationStrictInternal( else if (i->name == state.sOutputHashMode) handleHashMode(s); else if (i->name == state.sOutputs) handleOutputs(tokenizeString(s)); + else if (i->name == state.sJson) + warn("In derivation '%s': setting structured attributes via '__json' is deprecated, and may be disallowed in future versions of Nix. Set '__structuredAttrs = true' instead.", drvName); } } @@ -1917,7 +1906,7 @@ static void prim_findFile(EvalState & state, const PosIdx pos, Value * * args, V LookupPath lookupPath; - for (auto v2 : args[0]->listItems()) { + for (auto v2 : args[0]->listView()) { state.forceAttrs(*v2, pos, "while evaluating an element of the list passed to builtins.findFile"); std::string prefix; @@ -1925,7 +1914,7 @@ static void prim_findFile(EvalState & state, const PosIdx pos, Value * * args, V if (i != v2->attrs()->end()) prefix = state.forceStringNoCtx(*i->value, pos, "while evaluating the `prefix` attribute of an element of the list passed to builtins.findFile"); - i = getAttr(state, state.sPath, v2->attrs(), "in an element of the __nixPath"); + i = state.getAttr(state.sPath, v2->attrs(), "in an element of the __nixPath"); NixStringContext context; auto path = state.coerceToString(pos, *i->value, context, @@ -1934,7 +1923,7 @@ static void prim_findFile(EvalState & state, const PosIdx pos, Value * * args, V try { auto rewrites = state.realiseContext(context); - path = rewriteStrings(path, rewrites); + path = rewriteStrings(std::move(path), rewrites); } catch (InvalidPathError & e) { state.error( "cannot find '%1%', since path '%2%' is not valid", @@ -1944,8 +1933,8 @@ static void prim_findFile(EvalState & state, const PosIdx pos, Value * * args, V } lookupPath.elements.emplace_back(LookupPath::Elem { - .prefix = LookupPath::Prefix { .s = prefix }, - .path = LookupPath::Path { .s = path }, + .prefix = LookupPath::Prefix { .s = std::move(prefix) }, + .path = LookupPath::Path { .s = std::move(path) }, }); } @@ -2218,7 +2207,7 @@ static RegisterPrimOp primop_outputOf({ [input placeholder string](@docroot@/store/derivation/index.md#input-placeholder) if needed. - If the derivation has a statically-known output path (i.e. the derivation output is input-addressed, or fixed content-addresed), the output path is returned. + If the derivation has a statically-known output path (i.e. the derivation output is input-addressed, or fixed content-addressed), the output path is returned. But if the derivation is content-addressed or if the derivation is itself not-statically produced (i.e. is the output of another derivation), an input placeholder is returned instead. *`derivation reference`* must be a string that may contain a regular store path to a derivation, or may be an input placeholder reference. 
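For orientation, a minimal usage sketch of `builtins.outputOf` under the behaviour described above (hedged: `myDrv` is a hypothetical derivation assumed to be in scope, and the `dynamic-derivations` experimental feature is assumed to be enabled):

```nix
# `myDrv` is a hypothetical stand-in for any derivation in scope.
# For an input-addressed or fixed content-addressed derivation this yields
# its literal store output path; for a floating content-addressed or
# dynamically produced derivation it yields an input placeholder string.
builtins.outputOf myDrv.drvPath "out"
```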
@@ -2410,7 +2399,7 @@ static RegisterPrimOp primop_fromJSON({ static void prim_toFile(EvalState & state, const PosIdx pos, Value * * args, Value & v) { NixStringContext context; - std::string name(state.forceStringNoCtx(*args[0], pos, "while evaluating the first argument passed to builtins.toFile")); + auto name = state.forceStringNoCtx(*args[0], pos, "while evaluating the first argument passed to builtins.toFile"); std::string contents(state.forceString(*args[1], context, pos, "while evaluating the second argument passed to builtins.toFile")); StorePathSet refs; @@ -2591,6 +2580,7 @@ static void addPath( if (!expectedHash || !state.store->isValidPath(*expectedStorePath)) { // FIXME: make this lazy? auto dstPath = fetchToStore( + state.fetchSettings, *state.store, path.resolveSymlinks(), settings.readOnlyMode ? FetchMode::DryRun : FetchMode::Copy, @@ -2636,7 +2626,7 @@ static RegisterPrimOp primop_filterSource({ > the name of the input directory. Since `` depends on the > unfiltered directory, the name of the output directory > indirectly depends on files that are filtered out by the - > function. This triggers a rebuild even when a filtered-out + > function. This triggers a rebuild even when a filtered out > file is changed. Use `builtins.path` instead, which allows > specifying the name of the output directory. @@ -2681,7 +2671,7 @@ static RegisterPrimOp primop_filterSource({ static void prim_path(EvalState & state, const PosIdx pos, Value * * args, Value & v) { std::optional path; - std::string name; + std::string_view name; Value * filterFun = nullptr; auto method = ContentAddressMethod::Raw::NixArchive; std::optional expectedHash; @@ -2769,7 +2759,7 @@ static void prim_attrNames(EvalState & state, const PosIdx pos, Value * * args, auto list = state.buildList(args[0]->attrs()->size()); for (const auto & [n, i] : enumerate(*args[0]->attrs())) - (list[n] = state.allocValue())->mkString(state.symbols[i.name]); + list[n] = Value::toPtr(state.symbols[i.name]); std::sort(list.begin(), list.end(), [](Value * v1, Value * v2) { return strcmp(v1->c_str(), v2->c_str()) < 0; }); @@ -2827,8 +2817,7 @@ void prim_getAttr(EvalState & state, const PosIdx pos, Value * * args, Value & v { auto attr = state.forceStringNoCtx(*args[0], pos, "while evaluating the first argument passed to builtins.getAttr"); state.forceAttrs(*args[1], pos, "while evaluating the second argument passed to builtins.getAttr"); - auto i = getAttr( - state, + auto i = state.getAttr( state.symbols.create(attr), args[1]->attrs(), "in the attribute set under consideration" @@ -2875,7 +2864,7 @@ static RegisterPrimOp primop_unsafeGetAttrPos(PrimOp { .fun = prim_unsafeGetAttrPos, }); -// access to exact position information (ie, line and colum numbers) is deferred +// access to exact position information (ie, line and column numbers) is deferred // due to the cost associated with calculating that information and how rarely // it is used in practice. this is achieved by creating thunks to otherwise // inaccessible primops that are not exposed as __op or under builtins to turn @@ -2887,7 +2876,7 @@ static RegisterPrimOp primop_unsafeGetAttrPos(PrimOp { // but each type of thunk has an associated runtime cost in the current evaluator. // as with black holes this cost is too high to justify another thunk type to check // for in the very hot path that is forceValue. 
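As a rough illustration of what these deferred positions look like from the language side (a sketch only; the result shape is assumed from `unsafeGetAttrPos`, and the concrete values depend on where the attribute is defined):

```nix
# The `line` and `column` attributes of the returned set are the lazily
# computed values referred to in the comment above; they stay unevaluated
# until something forces them.
builtins.unsafeGetAttrPos "x" { x = 1; }
# result shape: { column = ...; file = ...; line = ...; }
```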
-static struct LazyPosAcessors { +static struct LazyPosAccessors { PrimOp primop_lineOfPos{ .arity = 1, .fun = [] (EvalState & state, PosIdx pos, Value * * args, Value & v) { @@ -2903,7 +2892,7 @@ static struct LazyPosAcessors { Value lineOfPos, columnOfPos; - LazyPosAcessors() + LazyPosAccessors() { lineOfPos.mkPrimOp(&primop_lineOfPos); columnOfPos.mkPrimOp(&primop_columnOfPos); @@ -2969,7 +2958,7 @@ static void prim_removeAttrs(EvalState & state, const PosIdx pos, Value * * args // 64: large enough to fit the attributes of a derivation boost::container::small_vector names; names.reserve(args[1]->listSize()); - for (auto elem : args[1]->listItems()) { + for (auto elem : args[1]->listView()) { state.forceStringNoCtx(*elem, pos, "while evaluating the values of the second argument passed to builtins.removeAttrs"); names.emplace_back(state.symbols.create(elem->string_view()), nullptr); } @@ -3011,25 +3000,48 @@ static void prim_listToAttrs(EvalState & state, const PosIdx pos, Value * * args { state.forceList(*args[0], pos, "while evaluating the argument passed to builtins.listToAttrs"); - auto attrs = state.buildBindings(args[0]->listSize()); + // Step 1. Sort the name-value attrsets in place using the memory we allocate for the result + auto listView = args[0]->listView(); + size_t listSize = listView.size(); + auto & bindings = *state.allocBindings(listSize); + using ElemPtr = decltype(&bindings[0].value); - std::set seen; - - for (auto v2 : args[0]->listItems()) { + for (const auto & [n, v2] : enumerate(listView)) { state.forceAttrs(*v2, pos, "while evaluating an element of the list passed to builtins.listToAttrs"); - auto j = getAttr(state, state.sName, v2->attrs(), "in a {name=...; value=...;} pair"); + auto j = state.getAttr(state.sName, v2->attrs(), "in a {name=...; value=...;} pair"); auto name = state.forceStringNoCtx(*j->value, j->pos, "while evaluating the `name` attribute of an element of the list passed to builtins.listToAttrs"); - auto sym = state.symbols.create(name); - if (seen.insert(sym).second) { - auto j2 = getAttr(state, state.sValue, v2->attrs(), "in a {name=...; value=...;} pair"); - attrs.insert(sym, j2->value, j2->pos); - } + + // (ab)use Attr to store a Value * * instead of a Value *, so that we can stabilize the sort using the Value * * + bindings[n] = Attr(sym, std::bit_cast(&v2)); } - v.mkAttrs(attrs); + std::sort(&bindings[0], &bindings[listSize], [](const Attr & a, const Attr & b) { + // Note that .value is actually a Value * * that corresponds to the position in the list + return a < b || (!(a > b) && std::bit_cast(a.value) < std::bit_cast(b.value)); + }); + + // Step 2. 
Unpack the bindings in place and skip name-value pairs with duplicate names + Symbol prev; + for (size_t n = 0; n < listSize; n++) { + auto attr = bindings[n]; + if (prev == attr.name) { + continue; + } + // Note that .value is actually a Value * *; see earlier comments + Value * v2 = *std::bit_cast(attr.value); + + auto j = state.getAttr(state.sValue, v2->attrs(), "in a {name=...; value=...;} pair"); + prev = attr.name; + bindings.push_back({prev, j->value, j->pos}); + } + // help GC and clear end of allocated array + for (size_t n = bindings.size(); n < listSize; n++) { + bindings[n] = Attr{}; + } + v.mkAttrs(&bindings); } static RegisterPrimOp primop_listToAttrs({ @@ -3149,14 +3161,14 @@ static void prim_catAttrs(EvalState & state, const PosIdx pos, Value * * args, V SmallValueVector res(args[1]->listSize()); size_t found = 0; - for (auto v2 : args[1]->listItems()) { + for (auto v2 : args[1]->listView()) { state.forceAttrs(*v2, pos, "while evaluating an element in the list passed as second argument to builtins.catAttrs"); if (auto i = v2->attrs()->get(attrName)) res[found++] = i->value; } auto list = state.buildList(found); - for (unsigned int n = 0; n < found; ++n) + for (size_t n = 0; n < found; ++n) list[n] = res[n]; v.mkList(list); } @@ -3188,15 +3200,21 @@ static void prim_functionArgs(EvalState & state, const PosIdx pos, Value * * arg if (!args[0]->isLambda()) state.error("'functionArgs' requires a function").atPos(pos).debugThrow(); - if (!args[0]->payload.lambda.fun->hasFormals()) { + if (!args[0]->lambda().fun->hasFormals()) { v.mkAttrs(&state.emptyBindings); return; } - auto attrs = state.buildBindings(args[0]->payload.lambda.fun->formals->formals.size()); - for (auto & i : args[0]->payload.lambda.fun->formals->formals) + const auto &formals = args[0]->lambda().fun->formals->formals; + auto attrs = state.buildBindings(formals.size()); + for (auto & i : formals) attrs.insert(i.name, state.getBool(i.def), i.pos); - v.mkAttrs(attrs); + /* Optimization: avoid sorting bindings. 
`formals` must already be sorted according to + (std::tie(a.name, a.pos) < std::tie(b.name, b.pos)) predicate, so the following assertion + always holds: + assert(std::is_sorted(attrs.alreadySorted()->begin(), attrs.alreadySorted()->end())); + .*/ + v.mkAttrs(attrs.alreadySorted()); } static RegisterPrimOp primop_functionArgs({ @@ -3224,9 +3242,8 @@ static void prim_mapAttrs(EvalState & state, const PosIdx pos, Value * * args, V auto attrs = state.buildBindings(args[1]->attrs()->size()); for (auto & i : *args[1]->attrs()) { - Value * vName = state.allocValue(); + Value * vName = Value::toPtr(state.symbols[i.name]); Value * vFun2 = state.allocValue(); - vName->mkString(state.symbols[i.name]); vFun2->mkApp(args[0], vName); attrs.alloc(i.name).mkApp(vFun2, i.value); } @@ -3269,7 +3286,7 @@ static void prim_zipAttrsWith(EvalState & state, const PosIdx pos, Value * * arg state.forceFunction(*args[0], pos, "while evaluating the first argument passed to builtins.zipAttrsWith"); state.forceList(*args[1], pos, "while evaluating the second argument passed to builtins.zipAttrsWith"); - const auto listItems = args[1]->listItems(); + const auto listItems = args[1]->listView(); for (auto & vElem : listItems) { state.forceAttrs(*vElem, noPos, "while evaluating a value of the list passed as second argument to builtins.zipAttrsWith"); @@ -3290,8 +3307,7 @@ static void prim_zipAttrsWith(EvalState & state, const PosIdx pos, Value * * arg auto attrs = state.buildBindings(attrsSeen.size()); for (auto & [sym, elem] : attrsSeen) { - auto name = state.allocValue(); - name->mkString(state.symbols[sym]); + auto name = Value::toPtr(state.symbols[sym]); auto call1 = state.allocValue(); call1->mkApp(args[0], name); auto call2 = state.allocValue(); @@ -3363,14 +3379,14 @@ static void prim_elemAt(EvalState & state, const PosIdx pos, Value * * args, Val { NixInt::Inner n = state.forceInt(*args[1], pos, "while evaluating the second argument passed to 'builtins.elemAt'").value; state.forceList(*args[0], pos, "while evaluating the first argument passed to 'builtins.elemAt'"); - if (n < 0 || (unsigned int) n >= args[0]->listSize()) + if (n < 0 || std::make_unsigned_t(n) >= args[0]->listSize()) state.error( "'builtins.elemAt' called with index %d on a list of size %d", n, args[0]->listSize() ).atPos(pos).debugThrow(); - state.forceValue(*args[0]->listElems()[n], pos); - v = *args[0]->listElems()[n]; + state.forceValue(*args[0]->listView()[n], pos); + v = *args[0]->listView()[n]; } static RegisterPrimOp primop_elemAt({ @@ -3391,8 +3407,8 @@ static void prim_head(EvalState & state, const PosIdx pos, Value * * args, Value state.error( "'builtins.head' called on an empty list" ).atPos(pos).debugThrow(); - state.forceValue(*args[0]->listElems()[0], pos); - v = *args[0]->listElems()[0]; + state.forceValue(*args[0]->listView()[0], pos); + v = *args[0]->listView()[0]; } static RegisterPrimOp primop_head({ @@ -3417,7 +3433,7 @@ static void prim_tail(EvalState & state, const PosIdx pos, Value * * args, Value auto list = state.buildList(args[0]->listSize() - 1); for (const auto & [n, v] : enumerate(list)) - v = args[0]->listElems()[n + 1]; + v = args[0]->listView()[n + 1]; v.mkList(list); } @@ -3452,7 +3468,7 @@ static void prim_map(EvalState & state, const PosIdx pos, Value * * args, Value auto list = state.buildList(args[1]->listSize()); for (const auto & [n, v] : enumerate(list)) (v = state.allocValue())->mkApp( - args[0], args[1]->listElems()[n]); + args[0], args[1]->listView()[n]); v.mkList(list); } @@ -3486,15 +3502,16 @@ static 
void prim_filter(EvalState & state, const PosIdx pos, Value * * args, Val state.forceFunction(*args[0], pos, "while evaluating the first argument passed to builtins.filter"); - SmallValueVector vs(args[1]->listSize()); + auto len = args[1]->listSize(); + SmallValueVector vs(len); size_t k = 0; bool same = true; - for (unsigned int n = 0; n < args[1]->listSize(); ++n) { + for (size_t n = 0; n < len; ++n) { Value res; - state.callFunction(*args[0], *args[1]->listElems()[n], res, noPos); + state.callFunction(*args[0], *args[1]->listView()[n], res, noPos); if (state.forceBool(res, pos, "while evaluating the return value of the filtering function passed to builtins.filter")) - vs[k++] = args[1]->listElems()[n]; + vs[k++] = args[1]->listView()[n]; else same = false; } @@ -3523,7 +3540,7 @@ static void prim_elem(EvalState & state, const PosIdx pos, Value * * args, Value { bool res = false; state.forceList(*args[1], pos, "while evaluating the second argument passed to builtins.elem"); - for (auto elem : args[1]->listItems()) + for (auto elem : args[1]->listView()) if (state.eqValues(*args[0], *elem, pos, "while searching for the presence of the given element in the list")) { res = true; break; @@ -3545,7 +3562,8 @@ static RegisterPrimOp primop_elem({ static void prim_concatLists(EvalState & state, const PosIdx pos, Value * * args, Value & v) { state.forceList(*args[0], pos, "while evaluating the first argument passed to builtins.concatLists"); - state.concatLists(v, args[0]->listSize(), args[0]->listElems(), pos, "while evaluating a value of the list passed to builtins.concatLists"); + auto listView = args[0]->listView(); + state.concatLists(v, args[0]->listSize(), listView.data(), pos, "while evaluating a value of the list passed to builtins.concatLists"); } static RegisterPrimOp primop_concatLists({ @@ -3583,7 +3601,8 @@ static void prim_foldlStrict(EvalState & state, const PosIdx pos, Value * * args if (args[2]->listSize()) { Value * vCur = args[1]; - for (auto [n, elem] : enumerate(args[2]->listItems())) { + auto listView = args[2]->listView(); + for (auto [n, elem] : enumerate(listView)) { Value * vs []{vCur, elem}; vCur = n == args[2]->listSize() - 1 ? &v : state.allocValue(); state.callFunction(*args[0], vs, *vCur, pos); @@ -3625,7 +3644,7 @@ static void anyOrAll(bool any, EvalState & state, const PosIdx pos, Value * * ar : "while evaluating the return value of the function passed to builtins.all"; Value vTmp; - for (auto elem : args[1]->listItems()) { + for (auto elem : args[1]->listView()) { state.callFunction(*args[0], *elem, vTmp, pos); bool res = state.forceBool(vTmp, pos, errorCtx); if (res == any) { @@ -3672,12 +3691,12 @@ static void prim_genList(EvalState & state, const PosIdx pos, Value * * args, Va { auto len_ = state.forceInt(*args[1], pos, "while evaluating the second argument passed to builtins.genList").value; - if (len_ < 0) + if (len_ < 0 || std::make_unsigned_t(len_) > std::numeric_limits::max()) state.error("cannot create list of size %1%", len_).atPos(pos).debugThrow(); size_t len = size_t(len_); - // More strict than striclty (!) necessary, but acceptable + // More strict than strictly (!) necessary, but acceptable // as evaluating map without accessing any values makes little sense. 
state.forceFunction(*args[0], noPos, "while evaluating the first argument passed to builtins.genList"); @@ -3723,7 +3742,7 @@ static void prim_sort(EvalState & state, const PosIdx pos, Value * * args, Value auto list = state.buildList(len); for (const auto & [n, v] : enumerate(list)) - state.forceValue(*(v = args[1]->listElems()[n]), pos); + state.forceValue(*(v = args[1]->listView()[n]), pos); auto comparator = [&](Value * a, Value * b) { /* Optimization: if the comparator is lessThan, bypass @@ -3740,10 +3759,14 @@ static void prim_sort(EvalState & state, const PosIdx pos, Value * * args, Value return state.forceBool(vBool, pos, "while evaluating the return value of the sorting function passed to builtins.sort"); }; - /* FIXME: std::sort can segfault if the comparator is not a strict - weak ordering. What to do? std::stable_sort() seems more - resilient, but no guarantees... */ - std::stable_sort(list.begin(), list.end(), comparator); + /* NOTE: Using custom implementation because std::sort and std::stable_sort + are not resilient to comparators that violate strict weak ordering. Diagnosing + incorrect implementations is a O(n^3) problem, so doing the checks is much more + expensive that doing the sorting. For this reason we choose to use sorting algorithms + that are can't be broken by invalid comprators. peeksort (mergesort) + doesn't misbehave when any of the strict weak order properties is + violated - output is always a reordering of the input. */ + peeksort(list.begin(), list.end(), comparator); v.mkList(list); } @@ -3765,6 +3788,32 @@ static RegisterPrimOp primop_sort({ This is a stable sort: it preserves the relative order of elements deemed equal by the comparator. + + *comparator* must impose a strict weak ordering on the set of values + in the *list*. This means that for any elements *a*, *b* and *c* from the + *list*, *comparator* must satisfy the following relations: + + 1. Transitivity + + ```nix + comparator a b && comparator b c -> comparator a c + ``` + + 1. Irreflexivity + + ```nix + comparator a a == false + ``` + + 1. Transitivity of equivalence + + ```nix + let equiv = a: b: (!comparator a b && !comparator b a); in + equiv a b && equiv b c -> equiv a c + ``` + + If the *comparator* violates any of these properties, then `builtins.sort` + reorders elements in an unspecified manner. 
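For example, comparing by a single scalar key yields a valid strict weak ordering (a usage sketch; the attribute name `priority` is an arbitrary illustration):

```nix
builtins.sort (a: b: a.priority < b.priority) [
  { name = "b"; priority = 2; }
  { name = "a"; priority = 1; }
  { name = "c"; priority = 1; }
]
# => the element named "a" comes before "c" (equal keys keep their
#    relative order), followed by "b".
```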
)", .fun = prim_sort, }); @@ -3778,8 +3827,8 @@ static void prim_partition(EvalState & state, const PosIdx pos, Value * * args, ValueVector right, wrong; - for (unsigned int n = 0; n < len; ++n) { - auto vElem = args[1]->listElems()[n]; + for (size_t n = 0; n < len; ++n) { + auto vElem = args[1]->listView()[n]; state.forceValue(*vElem, pos); Value res; state.callFunction(*args[0], *vElem, res, pos); @@ -3836,7 +3885,7 @@ static void prim_groupBy(EvalState & state, const PosIdx pos, Value * * args, Va ValueVectorMap attrs; - for (auto vElem : args[1]->listItems()) { + for (auto vElem : args[1]->listView()) { Value res; state.callFunction(*args[0], *vElem, res, pos); auto name = state.forceStringNoCtx(res, pos, "while evaluating the return value of the grouping function passed to builtins.groupBy"); @@ -3891,8 +3940,8 @@ static void prim_concatMap(EvalState & state, const PosIdx pos, Value * * args, SmallTemporaryValueVector lists(nrLists); size_t len = 0; - for (unsigned int n = 0; n < nrLists; ++n) { - Value * vElem = args[1]->listElems()[n]; + for (size_t n = 0; n < nrLists; ++n) { + Value * vElem = args[1]->listView()[n]; state.callFunction(*args[0], *vElem, lists[n], pos); state.forceList(lists[n], lists[n].determinePos(args[0]->determinePos(pos)), "while evaluating the return value of the function passed to builtins.concatMap"); len += lists[n].listSize(); @@ -3900,10 +3949,11 @@ static void prim_concatMap(EvalState & state, const PosIdx pos, Value * * args, auto list = state.buildList(len); auto out = list.elems; - for (unsigned int n = 0, pos = 0; n < nrLists; ++n) { - auto l = lists[n].listSize(); + for (size_t n = 0, pos = 0; n < nrLists; ++n) { + auto listView = lists[n].listView(); + auto l = listView.size(); if (l) - memcpy(out + pos, lists[n].listElems(), l * sizeof(Value *)); + memcpy(out + pos, listView.data(), l * sizeof(Value *)); pos += l; } v.mkList(list); @@ -4165,22 +4215,20 @@ static RegisterPrimOp primop_toString({ non-negative. */ static void prim_substring(EvalState & state, const PosIdx pos, Value * * args, Value & v) { + using NixUInt = std::make_unsigned_t; NixInt::Inner start = state.forceInt(*args[0], pos, "while evaluating the first argument (the start offset) passed to builtins.substring").value; if (start < 0) state.error("negative start position in 'substring'").atPos(pos).debugThrow(); - NixInt::Inner len = state.forceInt(*args[1], pos, "while evaluating the second argument (the substring length) passed to builtins.substring").value; // Negative length may be idiomatically passed to builtins.substring to get // the tail of the string. - if (len < 0) { - len = std::numeric_limits::max(); - } + auto _len = std::numeric_limits::max(); // Special-case on empty substring to avoid O(n) strlen - // This allows for the use of empty substrings to efficently capture string context + // This allows for the use of empty substrings to efficiently capture string context if (len == 0) { state.forceValue(*args[2], pos); if (args[2]->type() == nString) { @@ -4189,10 +4237,14 @@ static void prim_substring(EvalState & state, const PosIdx pos, Value * * args, } } + if (len >= 0 && NixUInt(len) < _len) { + _len = len; + } + NixStringContext context; auto s = state.coerceToString(pos, *args[2], context, "while evaluating the third argument (the string) passed to builtins.substring"); - v.mkString((unsigned int) start >= s->size() ? "" : s->substr(start, len), context); + v.mkString(NixUInt(start) >= s->size() ? 
"" : s->substr(start, _len), context); } static RegisterPrimOp primop_substring({ @@ -4263,7 +4315,7 @@ static void prim_convertHash(EvalState & state, const PosIdx pos, Value * * args state.forceAttrs(*args[0], pos, "while evaluating the first argument passed to builtins.convertHash"); auto inputAttrs = args[0]->attrs(); - auto iteratorHash = getAttr(state, state.symbols.create("hash"), inputAttrs, "while locating the attribute 'hash'"); + auto iteratorHash = state.getAttr(state.symbols.create("hash"), inputAttrs, "while locating the attribute 'hash'"); auto hash = state.forceStringNoCtx(*iteratorHash->value, pos, "while evaluating the attribute 'hash'"); auto iteratorHashAlgo = inputAttrs->get(state.symbols.create("hashAlgo")); @@ -4271,7 +4323,7 @@ static void prim_convertHash(EvalState & state, const PosIdx pos, Value * * args if (iteratorHashAlgo) ha = parseHashAlgo(state.forceStringNoCtx(*iteratorHashAlgo->value, pos, "while evaluating the attribute 'hashAlgo'")); - auto iteratorToHashFormat = getAttr(state, state.symbols.create("toHashFormat"), args[0]->attrs(), "while locating the attribute 'toHashFormat'"); + auto iteratorToHashFormat = state.getAttr(state.symbols.create("toHashFormat"), args[0]->attrs(), "while locating the attribute 'toHashFormat'"); HashFormat hf = parseHashFormat(state.forceStringNoCtx(*iteratorToHashFormat->value, pos, "while evaluating the attribute 'toHashFormat'")); v.mkString(Hash::parseAny(hash, ha).to_string(hf, hf == HashFormat::SRI)); @@ -4496,7 +4548,7 @@ void prim_split(EvalState & state, const PosIdx pos, Value * * args, Value & v) // Add a list for matched substrings. const size_t slen = match.size() - 1; - // Start at 1, beacause the first match is the whole string. + // Start at 1, because the first match is the whole string. auto list2 = state.buildList(slen); for (const auto & [si, v2] : enumerate(list2)) { if (!match[si + 1].matched) @@ -4577,7 +4629,7 @@ static void prim_concatStringsSep(EvalState & state, const PosIdx pos, Value * * res.reserve((args[1]->listSize() + 32) * sep.size()); bool first = true; - for (auto elem : args[1]->listItems()) { + for (auto elem : args[1]->listView()) { if (first) first = false; else res += sep; res += *state.coerceToString(pos, *elem, context, "while evaluating one element of the list of strings to concat passed to builtins.concatStringsSep"); } @@ -4605,13 +4657,13 @@ static void prim_replaceStrings(EvalState & state, const PosIdx pos, Value * * a "'from' and 'to' arguments passed to builtins.replaceStrings have different lengths" ).atPos(pos).debugThrow(); - std::vector from; + std::vector from; from.reserve(args[0]->listSize()); - for (auto elem : args[0]->listItems()) + for (auto elem : args[0]->listView()) from.emplace_back(state.forceString(*elem, pos, "while evaluating one of the strings to replace passed to builtins.replaceStrings")); - std::unordered_map cache; - auto to = args[1]->listItems(); + std::unordered_map cache; + auto to = args[1]->listView(); NixStringContext context; auto s = state.forceString(*args[2], context, pos, "while evaluating the third argument passed to builtins.replaceStrings"); @@ -4864,7 +4916,7 @@ void EvalState::createBaseEnv(const EvalSettings & evalSettings) 1683705525 ``` - The [store path](@docroot@/store/store-path.md) of a derivation depending on `currentTime` differs for each evaluation unless both evaluate `builtins.currentTime` in the same second. 
+ The [store path](@docroot@/store/store-path.md) of a derivation depending on `currentTime` differs for each evaluation, unless both evaluate `builtins.currentTime` in the same second. )", .impureOnly = true, }); @@ -5040,7 +5092,7 @@ void EvalState::createBaseEnv(const EvalSettings & evalSettings) /* Now that we've added all primops, sort the `builtins' set, because attribute lookups expect it to be sorted. */ - getBuiltins().payload.attrs->sort(); + const_cast(getBuiltins().attrs())->sort(); staticBaseEnv->sort(); diff --git a/src/libexpr/primops/context.cc b/src/libexpr/primops/context.cc index 7145353b0..f90a649d9 100644 --- a/src/libexpr/primops/context.cc +++ b/src/libexpr/primops/context.cc @@ -332,7 +332,7 @@ static void prim_appendContext(EvalState & state, const PosIdx pos, Value * * ar name ).atPos(i.pos).debugThrow(); } - for (auto elem : attr->value->listItems()) { + for (auto elem : attr->value->listView()) { auto outputName = state.forceStringNoCtx(*elem, attr->pos, "while evaluating an output name within a string context"); context.emplace(NixStringContextElem::Built { .drvPath = makeConstantStorePathRef(namePath), diff --git a/src/libexpr/primops/fetchClosure.cc b/src/libexpr/primops/fetchClosure.cc index ea6145f6f..fb17d71b9 100644 --- a/src/libexpr/primops/fetchClosure.cc +++ b/src/libexpr/primops/fetchClosure.cc @@ -129,7 +129,7 @@ static void prim_fetchClosure(EvalState & state, const PosIdx pos, Value * * arg if (attrName == "fromPath") { NixStringContext context; - fromPath = state.coerceToStorePath(attr.pos, *attr.value, context, attrHint()); + fromPath = state.coerceToStorePath(attr.pos, *attr.value, context, attrHint()); // FIXME: overflow } else if (attrName == "toPath") { diff --git a/src/libexpr/primops/fetchTree.cc b/src/libexpr/primops/fetchTree.cc index 38eac6a8a..d8efa1e8a 100644 --- a/src/libexpr/primops/fetchTree.cc +++ b/src/libexpr/primops/fetchTree.cc @@ -303,7 +303,7 @@ static RegisterPrimOp primop_fetchTree({ - `"tarball"` Download a tar archive and extract it into the Nix store. - This has the same underyling implementation as [`builtins.fetchTarball`](@docroot@/language/builtins.md#builtins-fetchTarball) + This has the same underlying implementation as [`builtins.fetchTarball`](@docroot@/language/builtins.md#builtins-fetchTarball) - `url` (String, required) @@ -533,11 +533,12 @@ static void fetch(EvalState & state, const PosIdx pos, Value * * args, Value & v auto storePath = unpack ? 
fetchToStore( + state.fetchSettings, *state.store, fetchers::downloadTarball(state.store, state.fetchSettings, *url), FetchMode::Copy, name) - : fetchers::downloadFile(state.store, *url, name).storePath; + : fetchers::downloadFile(state.store, state.fetchSettings, *url, name).storePath; if (expectedHash) { auto hash = unpack diff --git a/src/libexpr/primops/meson.build b/src/libexpr/primops/meson.build index f910fe237..b8abc6409 100644 --- a/src/libexpr/primops/meson.build +++ b/src/libexpr/primops/meson.build @@ -1,6 +1,6 @@ generated_headers += gen_header.process( 'derivation.nix', - preserve_path_from: meson.project_source_root(), + preserve_path_from : meson.project_source_root(), ) sources += files( diff --git a/src/libexpr/print-ambiguous.cc b/src/libexpr/print-ambiguous.cc index e5bfe3ccd..e966b3f02 100644 --- a/src/libexpr/print-ambiguous.cc +++ b/src/libexpr/print-ambiguous.cc @@ -54,11 +54,13 @@ void printAmbiguous( break; } case nList: - if (seen && v.listSize() && !seen->insert(v.listElems()).second) + /* Use pointer to the Value instead of pointer to the elements, because + that would need to explicitly handle the case of SmallList. */ + if (seen && v.listSize() && !seen->insert(&v).second) str << "«repeated»"; else { str << "[ "; - for (auto v2 : v.listItems()) { + for (auto v2 : v.listView()) { if (v2) printAmbiguous(state, *v2, str, seen, depth - 1); else diff --git a/src/libexpr/print.cc b/src/libexpr/print.cc index 2badbb1bb..0aaa6b8b0 100644 --- a/src/libexpr/print.cc +++ b/src/libexpr/print.cc @@ -419,8 +419,8 @@ private: if (depth < options.maxDepth) { increaseIndent(); output << "["; - auto listItems = v.listItems(); - auto prettyPrint = shouldPrettyPrintList(listItems); + auto listItems = v.listView(); + auto prettyPrint = shouldPrettyPrintList(listItems.span()); size_t currentListItemsPrinted = 0; @@ -457,13 +457,13 @@ private: if (v.isLambda()) { output << "lambda"; - if (v.payload.lambda.fun) { - if (v.payload.lambda.fun->name) { - output << " " << state.symbols[v.payload.lambda.fun->name]; + if (v.lambda().fun) { + if (v.lambda().fun->name) { + output << " " << state.symbols[v.lambda().fun->name]; } std::ostringstream s; - s << state.positions[v.payload.lambda.fun->pos]; + s << state.positions[v.lambda().fun->pos]; output << " @ " << filterANSIEscapes(toView(s)); } } else if (v.isPrimOp()) { diff --git a/src/libexpr/value-to-json.cc b/src/libexpr/value-to-json.cc index e05d52693..ba98dd666 100644 --- a/src/libexpr/value-to-json.cc +++ b/src/libexpr/value-to-json.cc @@ -74,7 +74,7 @@ json printValueAsJSON(EvalState & state, bool strict, case nList: { out = json::array(); int i = 0; - for (auto elem : v.listItems()) { + for (auto elem : v.listView()) { try { out.push_back(printValueAsJSON(state, strict, *elem, pos, context, copyToStore)); } catch (Error & e) { diff --git a/src/libexpr/value-to-xml.cc b/src/libexpr/value-to-xml.cc index e26fff71b..235ef2627 100644 --- a/src/libexpr/value-to-xml.cc +++ b/src/libexpr/value-to-xml.cc @@ -114,7 +114,7 @@ static void printValueAsXML(EvalState & state, bool strict, bool location, case nList: { XMLOpenElement _(doc, "list"); - for (auto v2 : v.listItems()) + for (auto v2 : v.listView()) printValueAsXML(state, strict, location, *v2, doc, context, drvsSeen, pos); break; } @@ -126,18 +126,18 @@ static void printValueAsXML(EvalState & state, bool strict, bool location, break; } XMLAttrs xmlAttrs; - if (location) posToXML(state, xmlAttrs, state.positions[v.payload.lambda.fun->pos]); + if (location) posToXML(state, xmlAttrs, 
state.positions[v.lambda().fun->pos]); XMLOpenElement _(doc, "function", xmlAttrs); - if (v.payload.lambda.fun->hasFormals()) { + if (v.lambda().fun->hasFormals()) { XMLAttrs attrs; - if (v.payload.lambda.fun->arg) attrs["name"] = state.symbols[v.payload.lambda.fun->arg]; - if (v.payload.lambda.fun->formals->ellipsis) attrs["ellipsis"] = "1"; + if (v.lambda().fun->arg) attrs["name"] = state.symbols[v.lambda().fun->arg]; + if (v.lambda().fun->formals->ellipsis) attrs["ellipsis"] = "1"; XMLOpenElement _(doc, "attrspat", attrs); - for (auto & i : v.payload.lambda.fun->formals->lexicographicOrder(state.symbols)) + for (auto & i : v.lambda().fun->formals->lexicographicOrder(state.symbols)) doc.writeEmptyElement("attr", singletonAttrs("name", state.symbols[i.name])); } else - doc.writeEmptyElement("varpat", singletonAttrs("name", state.symbols[v.payload.lambda.fun->arg])); + doc.writeEmptyElement("varpat", singletonAttrs("name", state.symbols[v.lambda().fun->arg])); break; } diff --git a/src/libfetchers/attrs.cc b/src/libfetchers/attrs.cc index 47f6aa8c5..6808e8af1 100644 --- a/src/libfetchers/attrs.cc +++ b/src/libfetchers/attrs.cc @@ -89,9 +89,9 @@ bool getBoolAttr(const Attrs & attrs, const std::string & name) return *s; } -std::map attrsToQuery(const Attrs & attrs) +StringMap attrsToQuery(const Attrs & attrs) { - std::map query; + StringMap query; for (auto & attr : attrs) { if (auto v = std::get_if(&attr.second)) { query.insert_or_assign(attr.first, fmt("%d", *v)); diff --git a/src/libfetchers/cache.cc b/src/libfetchers/cache.cc index 9e339134b..10c21df7a 100644 --- a/src/libfetchers/cache.cc +++ b/src/libfetchers/cache.cc @@ -1,4 +1,5 @@ #include "nix/fetchers/cache.hh" +#include "nix/fetchers/fetch-settings.hh" #include "nix/util/users.hh" #include "nix/store/sqlite.hh" #include "nix/util/sync.hh" @@ -163,10 +164,12 @@ struct CacheImpl : Cache } }; -ref getCache() +ref Settings::getCache() const { - static auto cache = std::make_shared(); - return ref(cache); + auto cache(_cache.lock()); + if (!*cache) + *cache = std::make_shared(); + return ref(*cache); } } diff --git a/src/libfetchers/fetch-to-store.cc b/src/libfetchers/fetch-to-store.cc index e6b9430a2..d3e416c7f 100644 --- a/src/libfetchers/fetch-to-store.cc +++ b/src/libfetchers/fetch-to-store.cc @@ -1,5 +1,6 @@ #include "nix/fetchers/fetch-to-store.hh" #include "nix/fetchers/fetchers.hh" +#include "nix/fetchers/fetch-settings.hh" namespace nix { @@ -16,6 +17,7 @@ fetchers::Cache::Key makeSourcePathToHashCacheKey( } StorePath fetchToStore( + const fetchers::Settings & settings, Store & store, const SourcePath & path, FetchMode mode, @@ -24,10 +26,11 @@ StorePath fetchToStore( PathFilter * filter, RepairFlag repair) { - return fetchToStore2(store, path, mode, name, method, filter, repair).first; + return fetchToStore2(settings, store, path, mode, name, method, filter, repair).first; } std::pair fetchToStore2( + const fetchers::Settings & settings, Store & store, const SourcePath & path, FetchMode mode, @@ -45,7 +48,7 @@ std::pair fetchToStore2( if (fingerprint) { cacheKey = makeSourcePathToHashCacheKey(*fingerprint, method, subpath.abs()); - if (auto res = fetchers::getCache()->lookup(*cacheKey)) { + if (auto res = settings.getCache()->lookup(*cacheKey)) { auto hash = Hash::parseSRI(fetchers::getStrAttr(*res, "hash")); auto storePath = store.makeFixedOutputPathFromCA(name, ContentAddressWithReferences::fromParts(method, hash, {})); @@ -96,7 +99,7 @@ std::pair fetchToStore2( }); if (cacheKey) - fetchers::getCache()->upsert(*cacheKey, 
{{"hash", hash.to_string(HashFormat::SRI, true)}}); + settings.getCache()->upsert(*cacheKey, {{"hash", hash.to_string(HashFormat::SRI, true)}}); return {storePath, hash}; } diff --git a/src/libfetchers/fetchers.cc b/src/libfetchers/fetchers.cc index 4d2d66a72..c947d860a 100644 --- a/src/libfetchers/fetchers.cc +++ b/src/libfetchers/fetchers.cc @@ -135,7 +135,7 @@ ParsedURL Input::toURL() const return scheme->toURL(*this); } -std::string Input::toURLString(const std::map & extraQuery) const +std::string Input::toURLString(const StringMap & extraQuery) const { auto url = toURL(); for (auto & attr : extraQuery) @@ -198,7 +198,7 @@ std::tuple, Input> Input::fetchToStore(ref try { auto [accessor, result] = getAccessorUnchecked(store); - auto storePath = nix::fetchToStore(*store, SourcePath(accessor), FetchMode::Copy, result.getName()); + auto storePath = nix::fetchToStore(*settings, *store, SourcePath(accessor), FetchMode::Copy, result.getName()); auto narHash = store->queryPathInfo(storePath)->narHash; result.attrs.insert_or_assign("narHash", narHash.to_string(HashFormat::SRI, true)); diff --git a/src/libfetchers/git-utils.cc b/src/libfetchers/git-utils.cc index 935d328d6..8a10517fa 100644 --- a/src/libfetchers/git-utils.cc +++ b/src/libfetchers/git-utils.cc @@ -1,6 +1,7 @@ #include "nix/fetchers/git-utils.hh" #include "nix/fetchers/git-lfs-fetch.hh" #include "nix/fetchers/cache.hh" +#include "nix/fetchers/fetch-settings.hh" #include "nix/util/finally.hh" #include "nix/util/processes.hh" #include "nix/util/signals.hh" @@ -321,8 +322,17 @@ struct GitRepoImpl : GitRepo, std::enable_shared_from_this for (size_t n = 0; n < git_commit_parentcount(commit->get()); ++n) { git_commit * parent; - if (git_commit_parent(&parent, commit->get(), n)) - throw Error("getting parent of Git commit '%s': %s", *git_commit_id(commit->get()), git_error_last()->message); + if (git_commit_parent(&parent, commit->get(), n)) { + throw Error( + "Failed to retrieve the parent of Git commit '%s': %s. " + "This may be due to an incomplete repository history. " + "To resolve this, either enable the shallow parameter in your flake URL (?shallow=1) " + "or add set the shallow parameter to true in builtins.fetchGit, " + "or fetch the complete history for this branch.", + *git_commit_id(commit->get()), + git_error_last()->message + ); + } todo.push(Commit(parent)); } } @@ -367,7 +377,7 @@ struct GitRepoImpl : GitRepo, std::enable_shared_from_this if (git_config_iterator_glob_new(Setter(it), config.get(), "^submodule\\..*\\.(path|url|branch)$")) throw Error("iterating over .gitmodules: %s", git_error_last()->message); - std::map entries; + StringMap entries; while (true) { git_config_entry * entry = nullptr; @@ -586,7 +596,7 @@ struct GitRepoImpl : GitRepo, std::enable_shared_from_this }); /* Evaluate result through status code and checking if public - key fingerprints appear on stderr. This is neccessary + key fingerprints appear on stderr. This is necessary because the git command might also succeed due to the commit being signed by gpg keys that are present in the users key agent. 
*/ @@ -610,18 +620,18 @@ struct GitRepoImpl : GitRepo, std::enable_shared_from_this throw Error("Commit signature verification on commit %s failed: %s", rev.gitRev(), output); } - Hash treeHashToNarHash(const Hash & treeHash) override + Hash treeHashToNarHash(const fetchers::Settings & settings, const Hash & treeHash) override { auto accessor = getAccessor(treeHash, false, ""); fetchers::Cache::Key cacheKey{"treeHashToNarHash", {{"treeHash", treeHash.gitRev()}}}; - if (auto res = fetchers::getCache()->lookup(cacheKey)) + if (auto res = settings.getCache()->lookup(cacheKey)) return Hash::parseAny(fetchers::getStrAttr(*res, "narHash"), HashAlgorithm::SHA256); auto narHash = accessor->hashPath(CanonPath::root); - fetchers::getCache()->upsert(cacheKey, fetchers::Attrs({{"narHash", narHash.to_string(HashFormat::SRI, true)}})); + settings.getCache()->upsert(cacheKey, fetchers::Attrs({{"narHash", narHash.to_string(HashFormat::SRI, true)}})); return narHash; } @@ -655,28 +665,40 @@ ref GitRepo::openRepo(const std::filesystem::path & path, bool create, struct GitSourceAccessor : SourceAccessor { - ref repo; - Object root; - std::optional lfsFetch = std::nullopt; + struct State + { + ref repo; + Object root; + std::optional lfsFetch = std::nullopt; + }; + + Sync state_; GitSourceAccessor(ref repo_, const Hash & rev, bool smudgeLfs) - : repo(repo_) - , root(peelToTreeOrBlob(lookupObject(*repo, hashToOID(rev)).get())) + : state_{ + State { + .repo = repo_, + .root = peelToTreeOrBlob(lookupObject(*repo_, hashToOID(rev)).get()), + .lfsFetch = smudgeLfs ? std::make_optional(lfs::Fetch(*repo_, hashToOID(rev))) : std::nullopt, + } + } { - if (smudgeLfs) - lfsFetch = std::make_optional(lfs::Fetch(*repo, hashToOID(rev))); } std::string readBlob(const CanonPath & path, bool symlink) { - const auto blob = getBlob(path, symlink); + auto state(state_.lock()); - if (lfsFetch) { - if (lfsFetch->shouldFetch(path)) { + const auto blob = getBlob(*state, path, symlink); + + if (state->lfsFetch) { + if (state->lfsFetch->shouldFetch(path)) { StringSink s; try { + // FIXME: do we need to hold the state lock while + // doing this? auto contents = std::string((const char *) git_blob_rawcontent(blob.get()), git_blob_rawsize(blob.get())); - lfsFetch->fetch(contents, path, s, [&s](uint64_t size){ s.s.reserve(size); }); + state->lfsFetch->fetch(contents, path, s, [&s](uint64_t size){ s.s.reserve(size); }); } catch (Error & e) { e.addTrace({}, "while smudging git-lfs file '%s'", path); throw; @@ -695,15 +717,18 @@ struct GitSourceAccessor : SourceAccessor bool pathExists(const CanonPath & path) override { - return path.isRoot() ? true : (bool) lookup(path); + auto state(state_.lock()); + return path.isRoot() ? true : (bool) lookup(*state, path); } std::optional maybeLstat(const CanonPath & path) override { - if (path.isRoot()) - return Stat { .type = git_object_type(root.get()) == GIT_OBJECT_TREE ? tDirectory : tRegular }; + auto state(state_.lock()); - auto entry = lookup(path); + if (path.isRoot()) + return Stat { .type = git_object_type(state->root.get()) == GIT_OBJECT_TREE ? 
tDirectory : tRegular }; + + auto entry = lookup(*state, path); if (!entry) return std::nullopt; @@ -731,6 +756,8 @@ struct GitSourceAccessor : SourceAccessor DirEntries readDirectory(const CanonPath & path) override { + auto state(state_.lock()); + return std::visit(overloaded { [&](Tree tree) { DirEntries res; @@ -748,7 +775,7 @@ struct GitSourceAccessor : SourceAccessor [&](Submodule) { return DirEntries(); } - }, getTree(path)); + }, getTree(*state, path)); } std::string readLink(const CanonPath & path) override @@ -762,7 +789,9 @@ struct GitSourceAccessor : SourceAccessor */ std::optional getSubmoduleRev(const CanonPath & path) { - auto entry = lookup(path); + auto state(state_.lock()); + + auto entry = lookup(*state, path); if (!entry || git_tree_entry_type(entry) != GIT_OBJECT_COMMIT) return std::nullopt; @@ -773,7 +802,7 @@ struct GitSourceAccessor : SourceAccessor std::unordered_map lookupCache; /* Recursively look up 'path' relative to the root. */ - git_tree_entry * lookup(const CanonPath & path) + git_tree_entry * lookup(State & state, const CanonPath & path) { auto i = lookupCache.find(path); if (i != lookupCache.end()) return i->second.get(); @@ -783,7 +812,7 @@ struct GitSourceAccessor : SourceAccessor auto name = path.baseName().value(); - auto parentTree = lookupTree(*parent); + auto parentTree = lookupTree(state, *parent); if (!parentTree) return nullptr; auto count = git_tree_entrycount(parentTree->get()); @@ -812,29 +841,29 @@ struct GitSourceAccessor : SourceAccessor return res; } - std::optional lookupTree(const CanonPath & path) + std::optional lookupTree(State & state, const CanonPath & path) { if (path.isRoot()) { - if (git_object_type(root.get()) == GIT_OBJECT_TREE) - return dupObject((git_tree *) &*root); + if (git_object_type(state.root.get()) == GIT_OBJECT_TREE) + return dupObject((git_tree *) &*state.root); else return std::nullopt; } - auto entry = lookup(path); + auto entry = lookup(state, path); if (!entry || git_tree_entry_type(entry) != GIT_OBJECT_TREE) return std::nullopt; Tree tree; - if (git_tree_entry_to_object((git_object * *) (git_tree * *) Setter(tree), *repo, entry)) + if (git_tree_entry_to_object((git_object * *) (git_tree * *) Setter(tree), *state.repo, entry)) throw Error("looking up directory '%s': %s", showPath(path), git_error_last()->message); return tree; } - git_tree_entry * need(const CanonPath & path) + git_tree_entry * need(State & state, const CanonPath & path) { - auto entry = lookup(path); + auto entry = lookup(state, path); if (!entry) throw Error("'%s' does not exist", showPath(path)); return entry; @@ -842,16 +871,16 @@ struct GitSourceAccessor : SourceAccessor struct Submodule { }; - std::variant getTree(const CanonPath & path) + std::variant getTree(State & state, const CanonPath & path) { if (path.isRoot()) { - if (git_object_type(root.get()) == GIT_OBJECT_TREE) - return dupObject((git_tree *) &*root); + if (git_object_type(state.root.get()) == GIT_OBJECT_TREE) + return dupObject((git_tree *) &*state.root); else - throw Error("Git root object '%s' is not a directory", *git_object_id(root.get())); + throw Error("Git root object '%s' is not a directory", *git_object_id(state.root.get())); } - auto entry = need(path); + auto entry = need(state, path); if (git_tree_entry_type(entry) == GIT_OBJECT_COMMIT) return Submodule(); @@ -860,16 +889,16 @@ struct GitSourceAccessor : SourceAccessor throw Error("'%s' is not a directory", showPath(path)); Tree tree; - if (git_tree_entry_to_object((git_object * *) (git_tree * *) Setter(tree), 
*repo, entry)) + if (git_tree_entry_to_object((git_object * *) (git_tree * *) Setter(tree), *state.repo, entry)) throw Error("looking up directory '%s': %s", showPath(path), git_error_last()->message); return tree; } - Blob getBlob(const CanonPath & path, bool expectSymlink) + Blob getBlob(State & state, const CanonPath & path, bool expectSymlink) { - if (!expectSymlink && git_object_type(root.get()) == GIT_OBJECT_BLOB) - return dupObject((git_blob *) &*root); + if (!expectSymlink && git_object_type(state.root.get()) == GIT_OBJECT_BLOB) + return dupObject((git_blob *) &*state.root); auto notExpected = [&]() { @@ -882,7 +911,7 @@ struct GitSourceAccessor : SourceAccessor if (path.isRoot()) notExpected(); - auto entry = need(path); + auto entry = need(state, path); if (git_tree_entry_type(entry) != GIT_OBJECT_BLOB) notExpected(); @@ -897,7 +926,7 @@ struct GitSourceAccessor : SourceAccessor } Blob blob; - if (git_tree_entry_to_object((git_object * *) (git_blob * *) Setter(blob), *repo, entry)) + if (git_tree_entry_to_object((git_object * *) (git_blob * *) Setter(blob), *state.repo, entry)) throw Error("looking up file '%s': %s", showPath(path), git_error_last()->message); return blob; diff --git a/src/libfetchers/git.cc b/src/libfetchers/git.cc index 4a00d4e34..7b4037941 100644 --- a/src/libfetchers/git.cc +++ b/src/libfetchers/git.cc @@ -481,11 +481,11 @@ struct GitInputScheme : InputScheme return repoInfo; } - uint64_t getLastModified(const RepoInfo & repoInfo, const std::filesystem::path & repoDir, const Hash & rev) const + uint64_t getLastModified(const Settings & settings, const RepoInfo & repoInfo, const std::filesystem::path & repoDir, const Hash & rev) const { Cache::Key key{"gitLastModified", {{"rev", rev.gitRev()}}}; - auto cache = getCache(); + auto cache = settings.getCache(); if (auto res = cache->lookup(key)) return getIntAttr(*res, "lastModified"); @@ -497,11 +497,11 @@ struct GitInputScheme : InputScheme return lastModified; } - uint64_t getRevCount(const RepoInfo & repoInfo, const std::filesystem::path & repoDir, const Hash & rev) const + uint64_t getRevCount(const Settings & settings, const RepoInfo & repoInfo, const std::filesystem::path & repoDir, const Hash & rev) const { Cache::Key key{"gitRevCount", {{"rev", rev.gitRev()}}}; - auto cache = getCache(); + auto cache = settings.getCache(); if (auto revCountAttrs = cache->lookup(key)) return getIntAttr(*revCountAttrs, "revCount"); @@ -679,12 +679,12 @@ struct GitInputScheme : InputScheme Attrs infoAttrs({ {"rev", rev.gitRev()}, - {"lastModified", getLastModified(repoInfo, repoDir, rev)}, + {"lastModified", getLastModified(*input.settings, repoInfo, repoDir, rev)}, }); if (!getShallowAttr(input)) infoAttrs.insert_or_assign("revCount", - getRevCount(repoInfo, repoDir, rev)); + getRevCount(*input.settings, repoInfo, repoDir, rev)); printTalkative("using revision %s of repo '%s'", rev.gitRev(), repoInfo.locationToArg()); @@ -799,8 +799,10 @@ struct GitInputScheme : InputScheme auto rev = repoInfo.workdirInfo.headRev.value_or(nullRev); input.attrs.insert_or_assign("rev", rev.gitRev()); - input.attrs.insert_or_assign("revCount", - rev == nullRev ? 0 : getRevCount(repoInfo, repoPath, rev)); + if (!getShallowAttr(input)) { + input.attrs.insert_or_assign("revCount", + rev == nullRev ? 0 : getRevCount(*input.settings, repoInfo, repoPath, rev)); + } verifyCommit(input, repo); } else { @@ -819,7 +821,7 @@ struct GitInputScheme : InputScheme input.attrs.insert_or_assign( "lastModified", repoInfo.workdirInfo.headRev - ? 
getLastModified(repoInfo, repoPath, *repoInfo.workdirInfo.headRev) + ? getLastModified(*input.settings, repoInfo, repoPath, *repoInfo.workdirInfo.headRev) : 0); return {accessor, std::move(input)}; diff --git a/src/libfetchers/github.cc b/src/libfetchers/github.cc index fcddb13ed..0888d387c 100644 --- a/src/libfetchers/github.cc +++ b/src/libfetchers/github.cc @@ -175,7 +175,7 @@ struct GitArchiveInputScheme : InputScheme return input; } - // Search for the longest possible match starting from the begining and ending at either the end or a path segment. + // Search for the longest possible match starting from the beginning and ending at either the end or a path segment. std::optional getAccessToken(const fetchers::Settings & settings, const std::string & host, const std::string & url) const override { auto tokens = settings.accessTokens.get(); @@ -265,7 +265,7 @@ struct GitArchiveInputScheme : InputScheme input.attrs.erase("ref"); input.attrs.insert_or_assign("rev", rev->gitRev()); - auto cache = getCache(); + auto cache = input.settings->getCache(); Cache::Key treeHashKey{"gitRevToTreeHash", {{"rev", rev->gitRev()}}}; Cache::Key lastModifiedKey{"gitRevToLastModified", {{"rev", rev->gitRev()}}}; @@ -409,7 +409,7 @@ struct GitHubInputScheme : GitArchiveInputScheme auto json = nlohmann::json::parse( readFile( store->toRealPath( - downloadFile(store, url, "source", headers).storePath))); + downloadFile(store, *input.settings, url, "source", headers).storePath))); return RefInfo { .rev = Hash::parseAny(std::string { json["sha"] }, HashAlgorithm::SHA1), @@ -483,7 +483,7 @@ struct GitLabInputScheme : GitArchiveInputScheme auto json = nlohmann::json::parse( readFile( store->toRealPath( - downloadFile(store, url, "source", headers).storePath))); + downloadFile(store, *input.settings, url, "source", headers).storePath))); if (json.is_array() && json.size() >= 1 && json[0]["id"] != nullptr) { return RefInfo { @@ -553,7 +553,7 @@ struct SourceHutInputScheme : GitArchiveInputScheme std::string refUri; if (ref == "HEAD") { auto file = store->toRealPath( - downloadFile(store, fmt("%s/HEAD", base_url), "source", headers).storePath); + downloadFile(store, *input.settings, fmt("%s/HEAD", base_url), "source", headers).storePath); std::ifstream is(file); std::string line; getline(is, line); @@ -569,7 +569,7 @@ struct SourceHutInputScheme : GitArchiveInputScheme std::regex refRegex(refUri); auto file = store->toRealPath( - downloadFile(store, fmt("%s/info/refs", base_url), "source", headers).storePath); + downloadFile(store, *input.settings, fmt("%s/info/refs", base_url), "source", headers).storePath); std::ifstream is(file); std::string line; diff --git a/src/libfetchers/include/nix/fetchers/attrs.hh b/src/libfetchers/include/nix/fetchers/attrs.hh index 1b757d712..582abd144 100644 --- a/src/libfetchers/include/nix/fetchers/attrs.hh +++ b/src/libfetchers/include/nix/fetchers/attrs.hh @@ -37,7 +37,7 @@ std::optional maybeGetBoolAttr(const Attrs & attrs, const std::string & na bool getBoolAttr(const Attrs & attrs, const std::string & name); -std::map attrsToQuery(const Attrs & attrs); +StringMap attrsToQuery(const Attrs & attrs); Hash getRevAttr(const Attrs & attrs, const std::string & name); diff --git a/src/libfetchers/include/nix/fetchers/cache.hh b/src/libfetchers/include/nix/fetchers/cache.hh index 4be6b2095..3f3089d3f 100644 --- a/src/libfetchers/include/nix/fetchers/cache.hh +++ b/src/libfetchers/include/nix/fetchers/cache.hh @@ -92,6 +92,4 @@ struct Cache Store & store) = 0; }; -ref getCache(); - } diff 
--git a/src/libfetchers/include/nix/fetchers/fetch-settings.hh b/src/libfetchers/include/nix/fetchers/fetch-settings.hh index e4fe92d5d..b055fd0e9 100644 --- a/src/libfetchers/include/nix/fetchers/fetch-settings.hh +++ b/src/libfetchers/include/nix/fetchers/fetch-settings.hh @@ -3,6 +3,8 @@ #include "nix/util/types.hh" #include "nix/util/configuration.hh" +#include "nix/util/ref.hh" +#include "nix/util/sync.hh" #include #include @@ -11,6 +13,8 @@ namespace nix::fetchers { +struct Cache; + struct Settings : public Config { Settings(); @@ -106,6 +110,11 @@ struct Settings : public Config When empty, disables the global flake registry. )"}; + + ref getCache() const; + +private: + mutable Sync> _cache; }; } diff --git a/src/libfetchers/include/nix/fetchers/fetch-to-store.hh b/src/libfetchers/include/nix/fetchers/fetch-to-store.hh index 364d25375..753bf8c67 100644 --- a/src/libfetchers/include/nix/fetchers/fetch-to-store.hh +++ b/src/libfetchers/include/nix/fetchers/fetch-to-store.hh @@ -15,6 +15,7 @@ enum struct FetchMode { DryRun, Copy }; * Copy the `path` to the Nix store. */ StorePath fetchToStore( + const fetchers::Settings & settings, Store & store, const SourcePath & path, FetchMode mode, @@ -24,6 +25,7 @@ StorePath fetchToStore( RepairFlag repair = NoRepair); std::pair fetchToStore2( + const fetchers::Settings & settings, Store & store, const SourcePath & path, FetchMode mode, diff --git a/src/libfetchers/include/nix/fetchers/fetchers.hh b/src/libfetchers/include/nix/fetchers/fetchers.hh index c2ae647af..cd096b29a 100644 --- a/src/libfetchers/include/nix/fetchers/fetchers.hh +++ b/src/libfetchers/include/nix/fetchers/fetchers.hh @@ -71,7 +71,7 @@ public: ParsedURL toURL() const; - std::string toURLString(const std::map & extraQuery = {}) const; + std::string toURLString(const StringMap & extraQuery = {}) const; std::string to_string() const; diff --git a/src/libfetchers/include/nix/fetchers/git-lfs-fetch.hh b/src/libfetchers/include/nix/fetchers/git-lfs-fetch.hh index e701288cf..b59da391a 100644 --- a/src/libfetchers/include/nix/fetchers/git-lfs-fetch.hh +++ b/src/libfetchers/include/nix/fetchers/git-lfs-fetch.hh @@ -1,3 +1,6 @@ +#pragma once +///@file + #include "nix/util/canon-path.hh" #include "nix/util/serialise.hh" #include "nix/util/url.hh" diff --git a/src/libfetchers/include/nix/fetchers/git-utils.hh b/src/libfetchers/include/nix/fetchers/git-utils.hh index 1506f8509..2926deb4f 100644 --- a/src/libfetchers/include/nix/fetchers/git-utils.hh +++ b/src/libfetchers/include/nix/fetchers/git-utils.hh @@ -5,7 +5,7 @@ namespace nix { -namespace fetchers { struct PublicKey; } +namespace fetchers { struct PublicKey; struct Settings; } /** * A sink that writes into a Git repository. Note that nothing may be written @@ -115,7 +115,7 @@ struct GitRepo * Given a Git tree hash, compute the hash of its NAR * serialisation. This is memoised on-disk. 
*/ - virtual Hash treeHashToNarHash(const Hash & treeHash) = 0; + virtual Hash treeHashToNarHash(const fetchers::Settings & settings, const Hash & treeHash) = 0; /** * If the specified Git object is a directory with a single entry diff --git a/src/libfetchers/include/nix/fetchers/tarball.hh b/src/libfetchers/include/nix/fetchers/tarball.hh index 691142091..2c5ea209f 100644 --- a/src/libfetchers/include/nix/fetchers/tarball.hh +++ b/src/libfetchers/include/nix/fetchers/tarball.hh @@ -26,6 +26,7 @@ struct DownloadFileResult DownloadFileResult downloadFile( ref store, + const Settings & settings, const std::string & url, const std::string & name, const Headers & headers = {}); diff --git a/src/libfetchers/mercurial.cc b/src/libfetchers/mercurial.cc index 74e9fd089..0b63876de 100644 --- a/src/libfetchers/mercurial.cc +++ b/src/libfetchers/mercurial.cc @@ -253,13 +253,13 @@ struct MercurialInputScheme : InputScheme }}; if (!input.getRev()) { - if (auto res = getCache()->lookupWithTTL(refToRevKey)) + if (auto res = input.settings->getCache()->lookupWithTTL(refToRevKey)) input.attrs.insert_or_assign("rev", getRevAttr(*res, "rev").gitRev()); } /* If we have a rev, check if we have a cached store path. */ if (auto rev = input.getRev()) { - if (auto res = getCache()->lookupStorePath(revInfoKey(*rev), *store)) + if (auto res = input.settings->getCache()->lookupStorePath(revInfoKey(*rev), *store)) return makeResult(res->value, res->storePath); } @@ -309,7 +309,7 @@ struct MercurialInputScheme : InputScheme /* Now that we have the rev, check the cache again for a cached store path. */ - if (auto res = getCache()->lookupStorePath(revInfoKey(rev), *store)) + if (auto res = input.settings->getCache()->lookupStorePath(revInfoKey(rev), *store)) return makeResult(res->value, res->storePath); Path tmpDir = createTempDir(); @@ -326,9 +326,9 @@ struct MercurialInputScheme : InputScheme }); if (!origRev) - getCache()->upsert(refToRevKey, {{"rev", rev.gitRev()}}); + input.settings->getCache()->upsert(refToRevKey, {{"rev", rev.gitRev()}}); - getCache()->upsert(revInfoKey(rev), *store, infoAttrs, storePath); + input.settings->getCache()->upsert(revInfoKey(rev), *store, infoAttrs, storePath); return makeResult(infoAttrs, std::move(storePath)); } diff --git a/src/libfetchers/path.cc b/src/libfetchers/path.cc index c199957eb..e9f205543 100644 --- a/src/libfetchers/path.cc +++ b/src/libfetchers/path.cc @@ -4,6 +4,7 @@ #include "nix/fetchers/store-path-accessor.hh" #include "nix/fetchers/cache.hh" #include "nix/fetchers/fetch-to-store.hh" +#include "nix/fetchers/fetch-settings.hh" namespace nix::fetchers { @@ -149,7 +150,7 @@ struct PathInputScheme : InputScheme // store, pre-create an entry in the fetcher cache. 
auto info = store->queryPathInfo(*storePath); accessor->fingerprint = fmt("path:%s", store->queryPathInfo(*storePath)->narHash.to_string(HashFormat::SRI, true)); - fetchers::getCache()->upsert( + input.settings->getCache()->upsert( makeSourcePathToHashCacheKey(*accessor->fingerprint, ContentAddressMethod::Raw::NixArchive, "/"), {{"hash", info->narHash.to_string(HashFormat::SRI, true)}}); diff --git a/src/libfetchers/registry.cc b/src/libfetchers/registry.cc index bfaf9569a..335935f53 100644 --- a/src/libfetchers/registry.cc +++ b/src/libfetchers/registry.cc @@ -156,7 +156,7 @@ static std::shared_ptr getGlobalRegistry(const Settings & settings, re } if (!isAbsolute(path)) { - auto storePath = downloadFile(store, path, "flake-registry.json").storePath; + auto storePath = downloadFile(store, settings, path, "flake-registry.json").storePath; if (auto store2 = store.dynamic_pointer_cast()) store2->addPermRoot(storePath, getCacheDir() + "/flake-registry.json"); path = store->toRealPath(storePath); diff --git a/src/libfetchers/tarball.cc b/src/libfetchers/tarball.cc index 1bd7e3e59..b0822cc33 100644 --- a/src/libfetchers/tarball.cc +++ b/src/libfetchers/tarball.cc @@ -9,11 +9,13 @@ #include "nix/fetchers/store-path-accessor.hh" #include "nix/store/store-api.hh" #include "nix/fetchers/git-utils.hh" +#include "nix/fetchers/fetch-settings.hh" namespace nix::fetchers { DownloadFileResult downloadFile( ref store, + const Settings & settings, const std::string & url, const std::string & name, const Headers & headers) @@ -25,7 +27,7 @@ DownloadFileResult downloadFile( {"name", name}, }}}; - auto cached = getCache()->lookupStorePath(key, *store); + auto cached = settings.getCache()->lookupStorePath(key, *store); auto useCached = [&]() -> DownloadFileResult { @@ -92,7 +94,7 @@ DownloadFileResult downloadFile( key.second.insert_or_assign("url", url); assert(!res.urls.empty()); infoAttrs.insert_or_assign("url", *res.urls.rbegin()); - getCache()->upsert(key, *store, infoAttrs, *storePath); + settings.getCache()->upsert(key, *store, infoAttrs, *storePath); } return { @@ -104,13 +106,14 @@ DownloadFileResult downloadFile( } static DownloadTarballResult downloadTarball_( + const Settings & settings, const std::string & url, const Headers & headers, const std::string & displayPrefix) { Cache::Key cacheKey{"tarball", {{"url", url}}}; - auto cached = getCache()->lookupExpired(cacheKey); + auto cached = settings.getCache()->lookupExpired(cacheKey); auto attrsToResult = [&](const Attrs & infoAttrs) { @@ -196,7 +199,7 @@ static DownloadTarballResult downloadTarball_( /* Insert a cache entry for every URL in the redirect chain. */ for (auto & url : res->urls) { cacheKey.second.insert_or_assign("url", url); - getCache()->upsert(cacheKey, infoAttrs); + settings.getCache()->upsert(cacheKey, infoAttrs); } // FIXME: add a cache entry for immutableUrl? That could allow @@ -341,7 +344,7 @@ struct FileInputScheme : CurlInputScheme the Nix store directly, since there is little deduplication benefit in using the Git cache for single big files like tarballs. 
*/ - auto file = downloadFile(store, getStrAttr(input.attrs, "url"), input.getName()); + auto file = downloadFile(store, *input.settings, getStrAttr(input.attrs, "url"), input.getName()); auto narHash = store->queryPathInfo(file.storePath)->narHash; input.attrs.insert_or_assign("narHash", narHash.to_string(HashFormat::SRI, true)); @@ -373,6 +376,7 @@ struct TarballInputScheme : CurlInputScheme auto input(_input); auto result = downloadTarball_( + *input.settings, getStrAttr(input.attrs, "url"), {}, "«" + input.to_string() + "»"); @@ -390,7 +394,7 @@ struct TarballInputScheme : CurlInputScheme input.attrs.insert_or_assign("lastModified", uint64_t(result.lastModified)); input.attrs.insert_or_assign("narHash", - getTarballCache()->treeHashToNarHash(result.treeHash).to_string(HashFormat::SRI, true)); + getTarballCache()->treeHashToNarHash(*input.settings, result.treeHash).to_string(HashFormat::SRI, true)); return {result.accessor, input}; } diff --git a/src/libflake-c/nix_api_flake.h b/src/libflake-c/nix_api_flake.h index f5b9dc542..a1a7060a6 100644 --- a/src/libflake-c/nix_api_flake.h +++ b/src/libflake-c/nix_api_flake.h @@ -27,7 +27,7 @@ extern "C" { typedef struct nix_flake_settings nix_flake_settings; /** - * @brief Context and paramaters for parsing a flake reference + * @brief Context and parameters for parsing a flake reference * @see nix_flake_reference_parse_flags_free * @see nix_flake_reference_parse_string */ diff --git a/src/libflake/call-flake.nix b/src/libflake/call-flake.nix index fe326291f..ed7947e06 100644 --- a/src/libflake/call-flake.nix +++ b/src/libflake/call-flake.nix @@ -39,24 +39,16 @@ let allNodes = mapAttrs ( key: node: let + hasOverride = overrides ? ${key}; + isRelative = node.locked.type or null == "path" && builtins.substring 0 1 node.locked.path != "/"; parentNode = allNodes.${getInputByPath lockFile.root node.parent}; - flakeDir = - let - dir = overrides.${key}.dir or node.locked.path or ""; - parentDir = parentNode.flakeDir; - in - if node ? parent then parentDir + ("/" + dir) else dir; - sourceInfo = - if overrides ? ${key} then + if hasOverride then overrides.${key}.sourceInfo - else if node.locked.type == "path" && builtins.substring 0 1 node.locked.path != "/" then + else if isRelative then parentNode.sourceInfo - // { - outPath = parentNode.sourceInfo.outPath + ("/" + flakeDir); - } else # FIXME: remove obsolete node.info. # Note: lock file entries are always final. 
@@ -64,7 +56,11 @@ let subdir = overrides.${key}.dir or node.locked.dir or ""; - outPath = sourceInfo + ((if subdir == "" then "" else "/") + subdir); + outPath = + if !hasOverride && isRelative then + parentNode.outPath + (if node.locked.path == "" then "" else "/" + node.locked.path) + else + sourceInfo.outPath + (if subdir == "" then "" else "/" + subdir); flake = import (outPath + "/flake.nix"); @@ -99,9 +95,9 @@ let assert builtins.isFunction flake.outputs; result else - sourceInfo; + sourceInfo // { inherit sourceInfo outPath; }; - inherit flakeDir sourceInfo; + inherit outPath sourceInfo; } ) lockFile.nodes; diff --git a/src/libflake/flake.cc b/src/libflake/flake.cc index 075708234..1dcc09d2d 100644 --- a/src/libflake/flake.cc +++ b/src/libflake/flake.cc @@ -234,8 +234,8 @@ static Flake readFlake( if (auto outputs = vInfo.attrs()->get(sOutputs)) { expectType(state, nFunction, *outputs->value, outputs->pos); - if (outputs->value->isLambda() && outputs->value->payload.lambda.fun->hasFormals()) { - for (auto & formal : outputs->value->payload.lambda.fun->formals->formals) { + if (outputs->value->isLambda() && outputs->value->lambda().fun->hasFormals()) { + for (auto & formal : outputs->value->lambda().fun->formals->formals) { if (formal.name != state.sSelf) flake.inputs.emplace(state.symbols[formal.name], FlakeInput { .ref = parseFlakeRef(state.fetchSettings, std::string(state.symbols[formal.name])) @@ -258,7 +258,7 @@ static Flake readFlake( state.symbols[setting.name], std::string(state.forceStringNoCtx(*setting.value, setting.pos, ""))); else if (setting.value->type() == nPath) { - auto storePath = fetchToStore(*state.store, setting.value->path(), FetchMode::Copy); + auto storePath = fetchToStore(state.fetchSettings, *state.store, setting.value->path(), FetchMode::Copy); flake.config.settings.emplace( state.symbols[setting.name], state.store->printStorePath(storePath)); @@ -273,7 +273,7 @@ static Flake readFlake( Explicit { state.forceBool(*setting.value, setting.pos, "") }); else if (setting.value->type() == nList) { std::vector ss; - for (auto elem : setting.value->listItems()) { + for (auto elem : setting.value->listView()) { if (elem->type() != nString) state.error("list element in flake configuration setting '%s' is %s while a string is expected", state.symbols[setting.name], showType(*setting.value)).debugThrow(); @@ -522,7 +522,7 @@ LockedFlake lockFlake( /* Resolve relative 'path:' inputs relative to the source path of the overrider. */ - auto overridenSourcePath = hasOverride ? i->second.sourcePath : sourcePath; + auto overriddenSourcePath = hasOverride ? i->second.sourcePath : sourcePath; /* Respect the "flakeness" of the input even if we override it. */ @@ -544,7 +544,7 @@ LockedFlake lockFlake( if (!input.ref) input.ref = FlakeRef::fromAttrs(state.fetchSettings, {{"type", "indirect"}, {"id", std::string(id)}}); - auto overridenParentPath = + auto overriddenParentPath = input.ref->input.isRelative() ? std::optional(hasOverride ? 
i->second.parentInputAttrPath : inputAttrPathPrefix) : std::nullopt; @@ -553,8 +553,8 @@ LockedFlake lockFlake( { if (auto relativePath = input.ref->input.isRelative()) { return SourcePath { - overridenSourcePath.accessor, - CanonPath(*relativePath, overridenSourcePath.path.parent().value()) + overriddenSourcePath.accessor, + CanonPath(*relativePath, overriddenSourcePath.path.parent().value()) }; } else return std::nullopt; @@ -589,7 +589,7 @@ LockedFlake lockFlake( if (oldLock && oldLock->originalRef.canonicalize() == input.ref->canonicalize() - && oldLock->parentInputAttrPath == overridenParentPath + && oldLock->parentInputAttrPath == overriddenParentPath && !hasCliOverride) { debug("keeping existing input '%s'", inputAttrPathS); @@ -711,7 +711,7 @@ LockedFlake lockFlake( inputFlake.lockedRef, ref, true, - overridenParentPath); + overriddenParentPath); node->inputs.insert_or_assign(id, childNode); @@ -760,7 +760,7 @@ LockedFlake lockFlake( } }(); - auto childNode = make_ref(lockedRef, ref, false, overridenParentPath); + auto childNode = make_ref(lockedRef, ref, false, overriddenParentPath); nodePaths.emplace(childNode, path); @@ -815,7 +815,7 @@ LockedFlake lockFlake( "Not writing lock file of flake '%s' because it has an unlocked input ('%s'). " "Use '--allow-dirty-locks' to allow this anyway.", topRef, *unlockedInput); if (state.fetchSettings.warnDirty) - warn("Not writing lock file of flake '%s' because it has an unlocked input ('%s')", topRef, *unlockedInput); + warn("not writing lock file of flake '%s' because it has an unlocked input ('%s')", topRef, *unlockedInput); } else { if (!lockFlags.updateLockFile) throw Error("flake '%s' requires lock file changes but they're not allowed due to '--no-update-lock-file'", topRef); diff --git a/src/libflake/flakeref.cc b/src/libflake/flakeref.cc index 12bddf578..37b7eff4c 100644 --- a/src/libflake/flakeref.cc +++ b/src/libflake/flakeref.cc @@ -15,7 +15,7 @@ const static std::string subDirRegex = subDirElemRegex + "(?:/" + subDirElemRege std::string FlakeRef::to_string() const { - std::map extraQuery; + StringMap extraQuery; if (subdir != "") extraQuery.insert_or_assign("dir", subdir); return input.toURLString(extraQuery); @@ -57,18 +57,6 @@ FlakeRef parseFlakeRef( return flakeRef; } -std::optional maybeParseFlakeRef( - const fetchers::Settings & fetchSettings, - const std::string & url, - const std::optional & baseDir) -{ - try { - return parseFlakeRef(fetchSettings, url, baseDir); - } catch (Error &) { - return {}; - } -} - static std::pair fromParsedURL( const fetchers::Settings & fetchSettings, ParsedURL && parsedURL, @@ -261,17 +249,6 @@ std::pair parseFlakeRefWithFragment( } } -std::optional> maybeParseFlakeRefWithFragment( - const fetchers::Settings & fetchSettings, - const std::string & url, const std::optional & baseDir) -{ - try { - return parseFlakeRefWithFragment(fetchSettings, url, baseDir); - } catch (Error & e) { - return {}; - } -} - FlakeRef FlakeRef::fromAttrs( const fetchers::Settings & fetchSettings, const fetchers::Attrs & attrs) diff --git a/src/libflake/include/nix/flake/flake.hh b/src/libflake/include/nix/flake/flake.hh index 50fd826af..8481aaa19 100644 --- a/src/libflake/include/nix/flake/flake.hh +++ b/src/libflake/include/nix/flake/flake.hh @@ -133,7 +133,7 @@ struct LockedFlake /** * Source tree accessors for nodes that have been fetched in - * lockFlake(); in particular, the root node and the overriden + * lockFlake(); in particular, the root node and the overridden * inputs. 
*/ std::map, SourcePath> nodePaths; diff --git a/src/libflake/include/nix/flake/flakeref.hh b/src/libflake/include/nix/flake/flakeref.hh index 6184d2363..c0045fcf3 100644 --- a/src/libflake/include/nix/flake/flakeref.hh +++ b/src/libflake/include/nix/flake/flakeref.hh @@ -93,14 +93,6 @@ FlakeRef parseFlakeRef( bool isFlake = true, bool preserveRelativePaths = false); -/** - * @param baseDir Optional [base directory](https://nixos.org/manual/nix/unstable/glossary#gloss-base-directory) - */ -std::optional maybeParseFlake( - const fetchers::Settings & fetchSettings, - const std::string & url, - const std::optional & baseDir = {}); - /** * @param baseDir Optional [base directory](https://nixos.org/manual/nix/unstable/glossary#gloss-base-directory) */ @@ -112,14 +104,6 @@ std::pair parseFlakeRefWithFragment( bool isFlake = true, bool preserveRelativePaths = false); -/** - * @param baseDir Optional [base directory](https://nixos.org/manual/nix/unstable/glossary#gloss-base-directory) - */ -std::optional> maybeParseFlakeRefWithFragment( - const fetchers::Settings & fetchSettings, - const std::string & url, - const std::optional & baseDir = {}); - /** * @param baseDir Optional [base directory](https://nixos.org/manual/nix/unstable/glossary#gloss-base-directory) */ diff --git a/src/libmain/include/nix/main/shared.hh b/src/libmain/include/nix/main/shared.hh index 2ff57135b..4d4b816e7 100644 --- a/src/libmain/include/nix/main/shared.hh +++ b/src/libmain/include/nix/main/shared.hh @@ -35,15 +35,17 @@ void printVersion(const std::string & programName); void printGCWarning(); class Store; +struct MissingPaths; void printMissing( ref store, const std::vector & paths, Verbosity lvl = lvlInfo); -void printMissing(ref store, const StorePathSet & willBuild, - const StorePathSet & willSubstitute, const StorePathSet & unknown, - uint64_t downloadSize, uint64_t narSize, Verbosity lvl = lvlInfo); +void printMissing( + ref store, + const MissingPaths & missing, + Verbosity lvl = lvlInfo); std::string getArg(const std::string & opt, Strings::iterator & i, const Strings::iterator & end); diff --git a/src/libmain/progress-bar.cc b/src/libmain/progress-bar.cc index 23f5ff8f7..173ab876c 100644 --- a/src/libmain/progress-bar.cc +++ b/src/libmain/progress-bar.cc @@ -259,7 +259,7 @@ public: update(*state); } - /* Check whether an activity has an ancestore with the specified + /* Check whether an activity has an ancestor with the specified type. */ bool hasAncestor(State & state, ActivityType type, ActivityId act) { @@ -382,7 +382,7 @@ public: /** * Redraw, if the output has changed. * - * Excessive redrawing is noticable on slow terminals, and it interferes + * Excessive redrawing is noticeable on slow terminals, and it interferes * with text selection in some terminals, including libvte-based terminal * emulators. 
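Note: the `shared.hh` hunk above changes `printMissing` to take a single `MissingPaths` value instead of a set of out-parameters. A minimal sketch of that refactor pattern follows, with plain strings standing in for the real `StorePath`/`StorePathSet` types; the field names mirror the ones used by the call sites in this patch.

```cpp
// Hedged sketch of the out-parameters -> result-struct refactor.
#include <cstdint>
#include <set>
#include <string>

struct MissingPaths {
    std::set<std::string> willBuild;
    std::set<std::string> willSubstitute;
    std::set<std::string> unknown;
    uint64_t downloadSize = 0;
    uint64_t narSize = 0;
};

// Before: queryMissing(..., willBuild, willSubstitute, unknown, downloadSize, narSize)
// After: a single value that printMissing() can take by const reference.
MissingPaths queryMissingSketch()
{
    MissingPaths res;
    // ... populate res.willBuild, res.willSubstitute, res.unknown and the sizes ...
    return res;
}
```

The `shared.cc` hunk that follows shows the corresponding consumers reading `missing.willBuild`, `missing.willSubstitute`, `missing.unknown` and the size fields.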
*/ diff --git a/src/libmain/shared.cc b/src/libmain/shared.cc index d9e8059f7..853554099 100644 --- a/src/libmain/shared.cc +++ b/src/libmain/shared.cc @@ -46,43 +46,41 @@ void printGCWarning() void printMissing(ref store, const std::vector & paths, Verbosity lvl) { - uint64_t downloadSize, narSize; - StorePathSet willBuild, willSubstitute, unknown; - store->queryMissing(paths, willBuild, willSubstitute, unknown, downloadSize, narSize); - printMissing(store, willBuild, willSubstitute, unknown, downloadSize, narSize, lvl); + printMissing(store, store->queryMissing(paths), lvl); } -void printMissing(ref store, const StorePathSet & willBuild, - const StorePathSet & willSubstitute, const StorePathSet & unknown, - uint64_t downloadSize, uint64_t narSize, Verbosity lvl) +void printMissing( + ref store, + const MissingPaths & missing, + Verbosity lvl) { - if (!willBuild.empty()) { - if (willBuild.size() == 1) + if (!missing.willBuild.empty()) { + if (missing.willBuild.size() == 1) printMsg(lvl, "this derivation will be built:"); else - printMsg(lvl, "these %d derivations will be built:", willBuild.size()); - auto sorted = store->topoSortPaths(willBuild); + printMsg(lvl, "these %d derivations will be built:", missing.willBuild.size()); + auto sorted = store->topoSortPaths(missing.willBuild); reverse(sorted.begin(), sorted.end()); for (auto & i : sorted) printMsg(lvl, " %s", store->printStorePath(i)); } - if (!willSubstitute.empty()) { - const float downloadSizeMiB = downloadSize / (1024.f * 1024.f); - const float narSizeMiB = narSize / (1024.f * 1024.f); - if (willSubstitute.size() == 1) { + if (!missing.willSubstitute.empty()) { + const float downloadSizeMiB = missing.downloadSize / (1024.f * 1024.f); + const float narSizeMiB = missing.narSize / (1024.f * 1024.f); + if (missing.willSubstitute.size() == 1) { printMsg(lvl, "this path will be fetched (%.2f MiB download, %.2f MiB unpacked):", downloadSizeMiB, narSizeMiB); } else { printMsg(lvl, "these %d paths will be fetched (%.2f MiB download, %.2f MiB unpacked):", - willSubstitute.size(), + missing.willSubstitute.size(), downloadSizeMiB, narSizeMiB); } std::vector willSubstituteSorted = {}; - std::for_each(willSubstitute.begin(), willSubstitute.end(), + std::for_each(missing.willSubstitute.begin(), missing.willSubstitute.end(), [&](const StorePath &p) { willSubstituteSorted.push_back(&p); }); std::sort(willSubstituteSorted.begin(), willSubstituteSorted.end(), [](const StorePath *lhs, const StorePath *rhs) { @@ -95,10 +93,10 @@ void printMissing(ref store, const StorePathSet & willBuild, printMsg(lvl, " %s", store->printStorePath(*p)); } - if (!unknown.empty()) { + if (!missing.unknown.empty()) { printMsg(lvl, "don't know how to build these paths%s:", (settings.readOnlyMode ? " (may be caused by read-only store access)" : "")); - for (auto & i : unknown) + for (auto & i : missing.unknown) printMsg(lvl, " %s", store->printStorePath(i)); } } @@ -176,16 +174,6 @@ void initNix(bool loadConfig) now. In particular, store objects should be readable by everybody. */ umask(0022); - - /* Initialise the PRNG. */ - struct timeval tv; - gettimeofday(&tv, 0); -#ifndef _WIN32 - srandom(tv.tv_usec); -#endif - srand(tv.tv_usec); - - } @@ -327,29 +315,34 @@ int handleExceptions(const std::string & programName, std::function fun) std::string error = ANSI_RED "error:" ANSI_NORMAL " "; try { try { - fun(); - } catch (...) 
{ - /* Subtle: we have to make sure that any `interrupted' - condition is discharged before we reach printMsg() - below, since otherwise it will throw an (uncaught) - exception. */ - setInterruptThrown(); - throw; + try { + fun(); + } catch (...) { + /* Subtle: we have to make sure that any `interrupted' + condition is discharged before we reach printMsg() + below, since otherwise it will throw an (uncaught) + exception. */ + setInterruptThrown(); + throw; + } + } catch (Exit & e) { + return e.status; + } catch (UsageError & e) { + logError(e.info()); + printError("Try '%1% --help' for more information.", programName); + return 1; + } catch (BaseError & e) { + logError(e.info()); + return e.info().status; + } catch (std::bad_alloc & e) { + printError(error + "out of memory"); + return 1; + } catch (std::exception & e) { + printError(error + e.what()); + return 1; } - } catch (Exit & e) { - return e.status; - } catch (UsageError & e) { - logError(e.info()); - printError("Try '%1% --help' for more information.", programName); - return 1; - } catch (BaseError & e) { - logError(e.info()); - return e.info().status; - } catch (std::bad_alloc & e) { - printError(error + "out of memory"); - return 1; - } catch (std::exception & e) { - printError(error + e.what()); + } catch (...) { + /* In case logger also throws just give up. */ return 1; } diff --git a/src/libstore-test-support/include/nix/store/tests/nix_api_store.hh b/src/libstore-test-support/include/nix/store/tests/nix_api_store.hh index 63f80cf91..e51be3dab 100644 --- a/src/libstore-test-support/include/nix/store/tests/nix_api_store.hh +++ b/src/libstore-test-support/include/nix/store/tests/nix_api_store.hh @@ -34,6 +34,8 @@ public: Store * store; std::string nixDir; std::string nixStoreDir; + std::string nixStateDir; + std::string nixLogDir; protected: void init_local_store() @@ -53,11 +55,13 @@ protected: #endif nixStoreDir = nixDir + "/my_nix_store"; + nixStateDir = nixDir + "/my_state"; + nixLogDir = nixDir + "/my_log"; // Options documented in `nix help-stores` const char * p1[] = {"store", nixStoreDir.c_str()}; - const char * p2[] = {"state", (new std::string(nixDir + "/my_state"))->c_str()}; - const char * p3[] = {"log", (new std::string(nixDir + "/my_log"))->c_str()}; + const char * p2[] = {"state", nixStateDir.c_str()}; + const char * p3[] = {"log", nixLogDir.c_str()}; const char ** params[] = {p1, p2, p3, nullptr}; diff --git a/src/libstore-tests/data/derivation/ca/advanced-attributes-structured-attrs-defaults.json b/src/libstore-tests/data/derivation/ca/advanced-attributes-structured-attrs-defaults.json index 7d3c932b2..183148b29 100644 --- a/src/libstore-tests/data/derivation/ca/advanced-attributes-structured-attrs-defaults.json +++ b/src/libstore-tests/data/derivation/ca/advanced-attributes-structured-attrs-defaults.json @@ -5,7 +5,6 @@ ], "builder": "/bin/bash", "env": { - "__json": "{\"builder\":\"/bin/bash\",\"name\":\"advanced-attributes-structured-attrs-defaults\",\"outputHashAlgo\":\"sha256\",\"outputHashMode\":\"recursive\",\"outputs\":[\"out\",\"dev\"],\"system\":\"my-system\"}", "dev": "/02qcpld1y6xhs5gz9bchpxaw0xdhmsp5dv88lh25r2ss44kh8dxz", "out": "/1rz4g4znpzjwh1xymhjpm42vipw92pr73vdgl6xs1hycac8kf2n9" }, @@ -22,5 +21,16 @@ "method": "nar" } }, + "structuredAttrs": { + "builder": "/bin/bash", + "name": "advanced-attributes-structured-attrs-defaults", + "outputHashAlgo": "sha256", + "outputHashMode": "recursive", + "outputs": [ + "out", + "dev" + ], + "system": "my-system" + }, "system": "my-system" } diff --git 
a/src/libstore-tests/data/derivation/ca/advanced-attributes-structured-attrs.json b/src/libstore-tests/data/derivation/ca/advanced-attributes-structured-attrs.json index a421efea7..ec044d778 100644 --- a/src/libstore-tests/data/derivation/ca/advanced-attributes-structured-attrs.json +++ b/src/libstore-tests/data/derivation/ca/advanced-attributes-structured-attrs.json @@ -5,7 +5,6 @@ ], "builder": "/bin/bash", "env": { - "__json": "{\"__darwinAllowLocalNetworking\":true,\"__impureHostDeps\":[\"/usr/bin/ditto\"],\"__noChroot\":true,\"__sandboxProfile\":\"sandcastle\",\"allowSubstitutes\":false,\"builder\":\"/bin/bash\",\"exportReferencesGraph\":{\"refs1\":[\"/164j69y6zir9z0339n8pjigg3rckinlr77bxsavzizdaaljb7nh9\"],\"refs2\":[\"/nix/store/qnml92yh97a6fbrs2m5qg5cqlc8vni58-bar.drv\"]},\"impureEnvVars\":[\"UNICORN\"],\"name\":\"advanced-attributes-structured-attrs\",\"outputChecks\":{\"bin\":{\"disallowedReferences\":[\"/0nyw57wm2iicnm9rglvjmbci3ikmcp823czdqdzdcgsnnwqps71g\"],\"disallowedRequisites\":[\"/07f301yqyz8c6wf6bbbavb2q39j4n8kmcly1s09xadyhgy6x2wr8\"]},\"dev\":{\"maxClosureSize\":5909,\"maxSize\":789},\"out\":{\"allowedReferences\":[\"/164j69y6zir9z0339n8pjigg3rckinlr77bxsavzizdaaljb7nh9\"],\"allowedRequisites\":[\"/0nr45p69vn6izw9446wsh9bng9nndhvn19kpsm4n96a5mycw0s4z\"]}},\"outputHashAlgo\":\"sha256\",\"outputHashMode\":\"recursive\",\"outputs\":[\"out\",\"bin\",\"dev\"],\"preferLocalBuild\":true,\"requiredSystemFeatures\":[\"rainbow\",\"uid-range\"],\"system\":\"my-system\"}", "bin": "/04f3da1kmbr67m3gzxikmsl4vjz5zf777sv6m14ahv22r65aac9m", "dev": "/02qcpld1y6xhs5gz9bchpxaw0xdhmsp5dv88lh25r2ss44kh8dxz", "out": "/1rz4g4znpzjwh1xymhjpm42vipw92pr73vdgl6xs1hycac8kf2n9" @@ -44,5 +43,62 @@ "method": "nar" } }, + "structuredAttrs": { + "__darwinAllowLocalNetworking": true, + "__impureHostDeps": [ + "/usr/bin/ditto" + ], + "__noChroot": true, + "__sandboxProfile": "sandcastle", + "allowSubstitutes": false, + "builder": "/bin/bash", + "exportReferencesGraph": { + "refs1": [ + "/164j69y6zir9z0339n8pjigg3rckinlr77bxsavzizdaaljb7nh9" + ], + "refs2": [ + "/nix/store/qnml92yh97a6fbrs2m5qg5cqlc8vni58-bar.drv" + ] + }, + "impureEnvVars": [ + "UNICORN" + ], + "name": "advanced-attributes-structured-attrs", + "outputChecks": { + "bin": { + "disallowedReferences": [ + "/0nyw57wm2iicnm9rglvjmbci3ikmcp823czdqdzdcgsnnwqps71g" + ], + "disallowedRequisites": [ + "/07f301yqyz8c6wf6bbbavb2q39j4n8kmcly1s09xadyhgy6x2wr8" + ] + }, + "dev": { + "maxClosureSize": 5909, + "maxSize": 789 + }, + "out": { + "allowedReferences": [ + "/164j69y6zir9z0339n8pjigg3rckinlr77bxsavzizdaaljb7nh9" + ], + "allowedRequisites": [ + "/0nr45p69vn6izw9446wsh9bng9nndhvn19kpsm4n96a5mycw0s4z" + ] + } + }, + "outputHashAlgo": "sha256", + "outputHashMode": "recursive", + "outputs": [ + "out", + "bin", + "dev" + ], + "preferLocalBuild": true, + "requiredSystemFeatures": [ + "rainbow", + "uid-range" + ], + "system": "my-system" + }, "system": "my-system" } diff --git a/src/libstore-tests/data/derivation/ia/advanced-attributes-structured-attrs-defaults.json b/src/libstore-tests/data/derivation/ia/advanced-attributes-structured-attrs-defaults.json index 473d006ac..f5349e6c3 100644 --- a/src/libstore-tests/data/derivation/ia/advanced-attributes-structured-attrs-defaults.json +++ b/src/libstore-tests/data/derivation/ia/advanced-attributes-structured-attrs-defaults.json @@ -5,7 +5,6 @@ ], "builder": "/bin/bash", "env": { - "__json": 
"{\"builder\":\"/bin/bash\",\"name\":\"advanced-attributes-structured-attrs-defaults\",\"outputs\":[\"out\",\"dev\"],\"system\":\"my-system\"}", "dev": "/nix/store/8bazivnbipbyi569623skw5zm91z6kc2-advanced-attributes-structured-attrs-defaults-dev", "out": "/nix/store/f8f8nvnx32bxvyxyx2ff7akbvwhwd9dw-advanced-attributes-structured-attrs-defaults" }, @@ -20,5 +19,14 @@ "path": "/nix/store/f8f8nvnx32bxvyxyx2ff7akbvwhwd9dw-advanced-attributes-structured-attrs-defaults" } }, + "structuredAttrs": { + "builder": "/bin/bash", + "name": "advanced-attributes-structured-attrs-defaults", + "outputs": [ + "out", + "dev" + ], + "system": "my-system" + }, "system": "my-system" } diff --git a/src/libstore-tests/data/derivation/ia/advanced-attributes-structured-attrs.json b/src/libstore-tests/data/derivation/ia/advanced-attributes-structured-attrs.json index d68502d56..b8d566462 100644 --- a/src/libstore-tests/data/derivation/ia/advanced-attributes-structured-attrs.json +++ b/src/libstore-tests/data/derivation/ia/advanced-attributes-structured-attrs.json @@ -5,7 +5,6 @@ ], "builder": "/bin/bash", "env": { - "__json": "{\"__darwinAllowLocalNetworking\":true,\"__impureHostDeps\":[\"/usr/bin/ditto\"],\"__noChroot\":true,\"__sandboxProfile\":\"sandcastle\",\"allowSubstitutes\":false,\"builder\":\"/bin/bash\",\"exportReferencesGraph\":{\"refs1\":[\"/nix/store/p0hax2lzvjpfc2gwkk62xdglz0fcqfzn-foo\"],\"refs2\":[\"/nix/store/vj2i49jm2868j2fmqvxm70vlzmzvgv14-bar.drv\"]},\"impureEnvVars\":[\"UNICORN\"],\"name\":\"advanced-attributes-structured-attrs\",\"outputChecks\":{\"bin\":{\"disallowedReferences\":[\"/nix/store/r5cff30838majxk5mp3ip2diffi8vpaj-bar\"],\"disallowedRequisites\":[\"/nix/store/9b61w26b4avv870dw0ymb6rw4r1hzpws-bar-dev\"]},\"dev\":{\"maxClosureSize\":5909,\"maxSize\":789},\"out\":{\"allowedReferences\":[\"/nix/store/p0hax2lzvjpfc2gwkk62xdglz0fcqfzn-foo\"],\"allowedRequisites\":[\"/nix/store/z0rjzy29v9k5qa4nqpykrbzirj7sd43v-foo-dev\"]}},\"outputs\":[\"out\",\"bin\",\"dev\"],\"preferLocalBuild\":true,\"requiredSystemFeatures\":[\"rainbow\",\"uid-range\"],\"system\":\"my-system\"}", "bin": "/nix/store/33qms3h55wlaspzba3brlzlrm8m2239g-advanced-attributes-structured-attrs-bin", "dev": "/nix/store/wyfgwsdi8rs851wmy1xfzdxy7y5vrg5l-advanced-attributes-structured-attrs-dev", "out": "/nix/store/7cxy4zx1vqc885r4jl2l64pymqbdmhii-advanced-attributes-structured-attrs" @@ -41,5 +40,60 @@ "path": "/nix/store/7cxy4zx1vqc885r4jl2l64pymqbdmhii-advanced-attributes-structured-attrs" } }, + "structuredAttrs": { + "__darwinAllowLocalNetworking": true, + "__impureHostDeps": [ + "/usr/bin/ditto" + ], + "__noChroot": true, + "__sandboxProfile": "sandcastle", + "allowSubstitutes": false, + "builder": "/bin/bash", + "exportReferencesGraph": { + "refs1": [ + "/nix/store/p0hax2lzvjpfc2gwkk62xdglz0fcqfzn-foo" + ], + "refs2": [ + "/nix/store/vj2i49jm2868j2fmqvxm70vlzmzvgv14-bar.drv" + ] + }, + "impureEnvVars": [ + "UNICORN" + ], + "name": "advanced-attributes-structured-attrs", + "outputChecks": { + "bin": { + "disallowedReferences": [ + "/nix/store/r5cff30838majxk5mp3ip2diffi8vpaj-bar" + ], + "disallowedRequisites": [ + "/nix/store/9b61w26b4avv870dw0ymb6rw4r1hzpws-bar-dev" + ] + }, + "dev": { + "maxClosureSize": 5909, + "maxSize": 789 + }, + "out": { + "allowedReferences": [ + "/nix/store/p0hax2lzvjpfc2gwkk62xdglz0fcqfzn-foo" + ], + "allowedRequisites": [ + "/nix/store/z0rjzy29v9k5qa4nqpykrbzirj7sd43v-foo-dev" + ] + } + }, + "outputs": [ + "out", + "bin", + "dev" + ], + "preferLocalBuild": true, + "requiredSystemFeatures": [ + 
"rainbow", + "uid-range" + ], + "system": "my-system" + }, "system": "my-system" } diff --git a/src/libstore-tests/machines.cc b/src/libstore-tests/machines.cc index 8873ff183..f11866e08 100644 --- a/src/libstore-tests/machines.cc +++ b/src/libstore-tests/machines.cc @@ -87,7 +87,7 @@ TEST(machines, getMachinesWithCommentsAndSemicolonSeparator) { TEST(machines, getMachinesWithFunnyWhitespace) { auto actual = Machine::parseConfig({}, - " # commment ; comment\n" + " # comment ; comment\n" " nix@scratchy.labs.cs.uu.nl ; nix@itchy.labs.cs.uu.nl \n" "\n \n" "\n ;;; \n" diff --git a/src/libstore-tests/nix_api_store.cc b/src/libstore-tests/nix_api_store.cc index b7495e0ab..05373cb88 100644 --- a/src/libstore-tests/nix_api_store.cc +++ b/src/libstore-tests/nix_api_store.cc @@ -67,17 +67,21 @@ TEST_F(nix_api_store_test, ReturnsValidStorePath) ASSERT_NE(result, nullptr); ASSERT_STREQ("name", result->path.name().data()); ASSERT_STREQ(PATH_SUFFIX.substr(1).c_str(), result->path.to_string().data()); + nix_store_path_free(result); } TEST_F(nix_api_store_test, SetsLastErrCodeToNixOk) { - nix_store_parse_path(ctx, store, (nixStoreDir + PATH_SUFFIX).c_str()); + StorePath * path = nix_store_parse_path(ctx, store, (nixStoreDir + PATH_SUFFIX).c_str()); ASSERT_EQ(ctx->last_err_code, NIX_OK); + nix_store_path_free(path); } TEST_F(nix_api_store_test, DoesNotCrashWhenContextIsNull) { - ASSERT_NO_THROW(nix_store_parse_path(ctx, store, (nixStoreDir + PATH_SUFFIX).c_str())); + StorePath * path = nullptr; + ASSERT_NO_THROW(path = nix_store_parse_path(ctx, store, (nixStoreDir + PATH_SUFFIX).c_str())); + nix_store_path_free(path); } TEST_F(nix_api_store_test, get_version) @@ -115,6 +119,7 @@ TEST_F(nix_api_store_test, nix_store_is_valid_path_not_in_store) { StorePath * path = nix_store_parse_path(ctx, store, (nixStoreDir + PATH_SUFFIX).c_str()); ASSERT_EQ(false, nix_store_is_valid_path(ctx, store, path)); + nix_store_path_free(path); } TEST_F(nix_api_store_test, nix_store_real_path) diff --git a/src/libstore-tests/outputs-spec.cc b/src/libstore-tests/outputs-spec.cc index a1c13d2f8..12f285e0d 100644 --- a/src/libstore-tests/outputs-spec.cc +++ b/src/libstore-tests/outputs-spec.cc @@ -46,7 +46,7 @@ TEST(OutputsSpec, names_underscore) { ASSERT_EQ(expected.to_string(), str); } -TEST(OutputsSpec, names_numberic) { +TEST(OutputsSpec, names_numeric) { std::string_view str = "01"; OutputsSpec expected = OutputsSpec::Names { "01" }; ASSERT_EQ(OutputsSpec::parse(str), expected); @@ -126,7 +126,7 @@ TEST_DONT_PARSE(star_second, "^foo,*") #undef TEST_DONT_PARSE -TEST(ExtendedOutputsSpec, defeault) { +TEST(ExtendedOutputsSpec, default) { std::string_view str = "foo"; auto [prefix, extendedOutputsSpec] = ExtendedOutputsSpec::parse(str); ASSERT_EQ(prefix, "foo"); diff --git a/src/libstore/build/derivation-building-goal.cc b/src/libstore/build/derivation-building-goal.cc new file mode 100644 index 000000000..53b5f7eb3 --- /dev/null +++ b/src/libstore/build/derivation-building-goal.cc @@ -0,0 +1,1257 @@ +#include "nix/store/build/derivation-building-goal.hh" +#include "nix/store/build/derivation-goal.hh" +#ifndef _WIN32 // TODO enable build hook on Windows +# include "nix/store/build/hook-instance.hh" +# include "nix/store/build/derivation-builder.hh" +#endif +#include "nix/util/processes.hh" +#include "nix/util/config-global.hh" +#include "nix/store/build/worker.hh" +#include "nix/util/util.hh" +#include "nix/util/compression.hh" +#include "nix/store/common-protocol.hh" +#include "nix/store/common-protocol-impl.hh" +#include 
"nix/store/local-store.hh" // TODO remove, along with remaining downcasts + +#include +#include +#include +#include + +#include + +#include "nix/util/strings.hh" + +namespace nix { + +DerivationBuildingGoal::DerivationBuildingGoal(const StorePath & drvPath, const Derivation & drv_, + Worker & worker, BuildMode buildMode) + : Goal(worker, gaveUpOnSubstitution()) + , drvPath(drvPath) + , buildMode(buildMode) +{ + drv = std::make_unique(drv_); + + if (auto parsedOpt = StructuredAttrs::tryParse(drv->env)) { + parsedDrv = std::make_unique(*parsedOpt); + } + try { + drvOptions = std::make_unique( + DerivationOptions::fromStructuredAttrs(drv->env, parsedDrv.get())); + } catch (Error & e) { + e.addTrace({}, "while parsing derivation '%s'", worker.store.printStorePath(drvPath)); + throw; + } + + name = fmt("building of '%s' from in-memory derivation", worker.store.printStorePath(drvPath)); + trace("created"); + + /* Prevent the .chroot directory from being + garbage-collected. (See isActiveTempFile() in gc.cc.) */ + worker.store.addTempRoot(this->drvPath); +} + + +DerivationBuildingGoal::~DerivationBuildingGoal() +{ + /* Careful: we should never ever throw an exception from a + destructor. */ + try { killChild(); } catch (...) { ignoreExceptionInDestructor(); } +#ifndef _WIN32 // TODO enable `DerivationBuilder` on Windows + if (builder) { + try { builder->stopDaemon(); } catch (...) { ignoreExceptionInDestructor(); } + try { builder->deleteTmpDir(false); } catch (...) { ignoreExceptionInDestructor(); } + } +#endif + try { closeLogFile(); } catch (...) { ignoreExceptionInDestructor(); } +} + + +std::string DerivationBuildingGoal::key() +{ + /* Ensure that derivations get built in order of their name, + i.e. a derivation named "aardvark" always comes before + "baboon". And substitution goals always happen before + derivation goals (due to "b$"). */ + return "bd$" + std::string(drvPath.name()) + "$" + worker.store.printStorePath(drvPath); +} + + +void DerivationBuildingGoal::killChild() +{ +#ifndef _WIN32 // TODO enable build hook on Windows + hook.reset(); +#endif +#ifndef _WIN32 // TODO enable `DerivationBuilder` on Windows + if (builder && builder->pid != -1) { + worker.childTerminated(this); + + // FIXME: move this into DerivationBuilder. + + /* If we're using a build user, then there is a tricky race + condition: if we kill the build user before the child has + done its setuid() to the build user uid, then it won't be + killed, and we'll potentially lock up in pid.wait(). So + also send a conventional kill to the child. */ + ::kill(-builder->pid, SIGKILL); /* ignore the result */ + + builder->killSandbox(true); + + builder->pid.wait(); + } +#endif +} + + +void DerivationBuildingGoal::timedOut(Error && ex) +{ + killChild(); + // We're not inside a coroutine, hence we can't use co_return here. + // Thus we ignore the return value. 
+ [[maybe_unused]] Done _ = done(BuildResult::TimedOut, {}, std::move(ex)); +} + + +/** + * Used for `inputGoals` local variable below + */ +struct value_comparison +{ + template + bool operator()(const ref & lhs, const ref & rhs) const { + return *lhs < *rhs; + } +}; + + +std::string showKnownOutputs(Store & store, const Derivation & drv) +{ + std::string msg; + StorePathSet expectedOutputPaths; + for (auto & i : drv.outputsAndOptPaths(store)) + if (i.second.second) + expectedOutputPaths.insert(*i.second.second); + if (!expectedOutputPaths.empty()) { + msg += "\nOutput paths:"; + for (auto & p : expectedOutputPaths) + msg += fmt("\n %s", Magenta(store.printStorePath(p))); + } + return msg; +} + + +/* At least one of the output paths could not be + produced using a substitute. So we have to build instead. */ +Goal::Co DerivationBuildingGoal::gaveUpOnSubstitution() +{ + Goals waitees; + + std::map, GoalPtr, value_comparison> inputGoals; + + { + std::function, const DerivedPathMap::ChildNode &)> addWaiteeDerivedPath; + + addWaiteeDerivedPath = [&](ref inputDrv, const DerivedPathMap::ChildNode & inputNode) { + if (!inputNode.value.empty()) { + auto g = worker.makeGoal( + DerivedPath::Built { + .drvPath = inputDrv, + .outputs = inputNode.value, + }, + buildMode == bmRepair ? bmRepair : bmNormal); + inputGoals.insert_or_assign(inputDrv, g); + waitees.insert(std::move(g)); + } + for (const auto & [outputName, childNode] : inputNode.childMap) + addWaiteeDerivedPath( + make_ref(SingleDerivedPath::Built { inputDrv, outputName }), + childNode); + }; + + for (const auto & [inputDrvPath, inputNode] : drv->inputDrvs.map) { + /* Ensure that pure, non-fixed-output derivations don't + depend on impure derivations. */ + if (experimentalFeatureSettings.isEnabled(Xp::ImpureDerivations) && !drv->type().isImpure() && !drv->type().isFixed()) { + auto inputDrv = worker.evalStore.readDerivation(inputDrvPath); + if (inputDrv.type().isImpure()) + throw Error("pure derivation '%s' depends on impure derivation '%s'", + worker.store.printStorePath(drvPath), + worker.store.printStorePath(inputDrvPath)); + } + + addWaiteeDerivedPath(makeConstantStorePathRef(inputDrvPath), inputNode); + } + } + + /* Copy the input sources from the eval store to the build + store. + + Note that some inputs might not be in the eval store because they + are (resolved) derivation outputs in a resolved derivation. */ + if (&worker.evalStore != &worker.store) { + RealisedPath::Set inputSrcs; + for (auto & i : drv->inputSrcs) + if (worker.evalStore.isValidPath(i)) + inputSrcs.insert(i); + copyClosure(worker.evalStore, worker.store, inputSrcs); + } + + for (auto & i : drv->inputSrcs) { + if (worker.store.isValidPath(i)) continue; + if (!settings.useSubstitutes) + throw Error("dependency '%s' of '%s' does not exist, and substitution is disabled", + worker.store.printStorePath(i), worker.store.printStorePath(drvPath)); + waitees.insert(upcast_goal(worker.makePathSubstitutionGoal(i))); + } + + co_await await(std::move(waitees)); + + + trace("all inputs realised"); + + if (nrFailed != 0) { + auto msg = fmt( + "Cannot build '%s'.\n" + "Reason: " ANSI_RED "%d %s failed" ANSI_NORMAL ".", + Magenta(worker.store.printStorePath(drvPath)), + nrFailed, + nrFailed == 1 ? "dependency" : "dependencies"); + msg += showKnownOutputs(worker.store, *drv); + co_return done(BuildResult::DependencyFailed, {}, Error(msg)); + } + + /* Gather information necessary for computing the closure and/or + running the build hook. */ + + /* Determine the full set of input paths. 
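Note: `gaveUpOnSubstitution()` above walks the derivation's nested input map with a self-referential `std::function`, creating one waitee goal per node that actually requests outputs. The following is a simplified, self-contained sketch of that traversal; the types and the `"!"` path separator are purely illustrative stand-ins for `DerivedPathMap` and its child nodes.

```cpp
// Hedged sketch of the recursive waitee-collection pattern.
#include <functional>
#include <map>
#include <set>
#include <string>
#include <utility>
#include <vector>

struct ChildNode {
    std::set<std::string> value;               // outputs requested at this level
    std::map<std::string, ChildNode> childMap; // nested requests (dynamic derivations)
};

std::vector<std::pair<std::string, std::set<std::string>>>
collectRequests(const std::string & drvPath, const ChildNode & root)
{
    std::vector<std::pair<std::string, std::set<std::string>>> out;
    std::function<void(const std::string &, const ChildNode &)> walk =
        [&](const std::string & path, const ChildNode & node) {
            if (!node.value.empty())
                out.emplace_back(path, node.value); // one "goal" per requesting node
            for (auto & [outputName, child] : node.childMap)
                walk(path + "!" + outputName, child); // "!" separator is illustrative
        };
    walk(drvPath, root);
    return out;
}
```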
*/ + + /* First, the input derivations. */ + { + auto & fullDrv = *drv; + + auto drvType = fullDrv.type(); + bool resolveDrv = std::visit(overloaded { + [&](const DerivationType::InputAddressed & ia) { + /* must resolve if deferred. */ + return ia.deferred; + }, + [&](const DerivationType::ContentAddressed & ca) { + return !fullDrv.inputDrvs.map.empty() && ( + ca.fixed + /* Can optionally resolve if fixed, which is good + for avoiding unnecessary rebuilds. */ + ? experimentalFeatureSettings.isEnabled(Xp::CaDerivations) + /* Must resolve if floating and there are any inputs + drvs. */ + : true); + }, + [&](const DerivationType::Impure &) { + return true; + } + }, drvType.raw) + /* no inputs are outputs of dynamic derivations */ + || std::ranges::any_of( + fullDrv.inputDrvs.map.begin(), + fullDrv.inputDrvs.map.end(), + [](auto & pair) { return !pair.second.childMap.empty(); }); + + if (resolveDrv && !fullDrv.inputDrvs.map.empty()) { + experimentalFeatureSettings.require(Xp::CaDerivations); + + /* We are be able to resolve this derivation based on the + now-known results of dependencies. If so, we become a + stub goal aliasing that resolved derivation goal. */ + std::optional attempt = fullDrv.tryResolve(worker.store, + [&](ref drvPath, const std::string & outputName) -> std::optional { + auto mEntry = get(inputGoals, drvPath); + if (!mEntry) return std::nullopt; + + auto buildResult = (*mEntry)->getBuildResult(DerivedPath::Built{drvPath, OutputsSpec::Names{outputName}}); + if (!buildResult.success()) return std::nullopt; + + auto i = get(buildResult.builtOutputs, outputName); + if (!i) return std::nullopt; + + return i->outPath; + }); + if (!attempt) { + /* TODO (impure derivations-induced tech debt) (see below): + The above attempt should have found it, but because we manage + inputDrvOutputs statefully, sometimes it gets out of sync with + the real source of truth (store). So we query the store + directly if there's a problem. 
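Note: the resolve decision above uses the `std::visit` + `overloaded` idiom over the derivation type. Below is a compilable sketch of the same control flow with stand-in types; the additional check for dynamic-derivation inputs in the real code is elided here.

```cpp
// Hedged sketch of the std::visit + overloaded decision, simplified types.
#include <variant>

template<class... Ts> struct overloaded : Ts... { using Ts::operator()...; };
template<class... Ts> overloaded(Ts...) -> overloaded<Ts...>;

struct InputAddressed { bool deferred; };
struct ContentAddressed { bool fixed; };
struct Impure {};
using DerivationType = std::variant<InputAddressed, ContentAddressed, Impure>;

bool mustResolve(const DerivationType & type, bool hasInputDrvs, bool caDerivationsEnabled)
{
    return std::visit(overloaded {
        // Input-addressed: only if output paths were deferred.
        [&](const InputAddressed & ia) { return ia.deferred; },
        // Content-addressed: always for floating outputs with inputs,
        // optionally for fixed outputs when the experimental feature is on.
        [&](const ContentAddressed & ca) {
            return hasInputDrvs && (ca.fixed ? caDerivationsEnabled : true);
        },
        // Impure derivations are always resolved.
        [&](const Impure &) { return true; },
    }, type);
}
```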
*/ + attempt = fullDrv.tryResolve(worker.store, &worker.evalStore); + } + assert(attempt); + Derivation drvResolved { std::move(*attempt) }; + + auto pathResolved = writeDerivation(worker.store, drvResolved); + + auto msg = fmt("resolved derivation: '%s' -> '%s'", + worker.store.printStorePath(drvPath), + worker.store.printStorePath(pathResolved)); + act = std::make_unique(*logger, lvlInfo, actBuildWaiting, msg, + Logger::Fields { + worker.store.printStorePath(drvPath), + worker.store.printStorePath(pathResolved), + }); + + // FIXME wanted outputs + auto resolvedDrvGoal = worker.makeDerivationGoal( + makeConstantStorePathRef(pathResolved), OutputsSpec::All{}, buildMode); + { + Goals waitees{resolvedDrvGoal}; + co_await await(std::move(waitees)); + } + + trace("resolved derivation finished"); + + auto resolvedDrv = *resolvedDrvGoal->drv; + auto resolvedResult = resolvedDrvGoal->getBuildResult(DerivedPath::Built{ + .drvPath = makeConstantStorePathRef(pathResolved), + .outputs = OutputsSpec::All{}, + }); + + SingleDrvOutputs builtOutputs; + + if (resolvedResult.success()) { + auto resolvedHashes = staticOutputHashes(worker.store, resolvedDrv); + + StorePathSet outputPaths; + + for (auto & outputName : resolvedDrv.outputNames()) { + auto initialOutput = get(initialOutputs, outputName); + auto resolvedHash = get(resolvedHashes, outputName); + if ((!initialOutput) || (!resolvedHash)) + throw Error( + "derivation '%s' doesn't have expected output '%s' (derivation-goal.cc/resolve)", + worker.store.printStorePath(drvPath), outputName); + + auto realisation = [&]{ + auto take1 = get(resolvedResult.builtOutputs, outputName); + if (take1) return *take1; + + /* The above `get` should work. But stateful tracking of + outputs in resolvedResult, this can get out of sync with the + store, which is our actual source of truth. For now we just + check the store directly if it fails. */ + auto take2 = worker.evalStore.queryRealisation(DrvOutput { *resolvedHash, outputName }); + if (take2) return *take2; + + throw Error( + "derivation '%s' doesn't have expected output '%s' (derivation-goal.cc/realisation)", + resolvedDrvGoal->drvReq->to_string(worker.store), outputName); + }(); + + if (!drv->type().isImpure()) { + auto newRealisation = realisation; + newRealisation.id = DrvOutput { initialOutput->outputHash, outputName }; + newRealisation.signatures.clear(); + if (!drv->type().isFixed()) { + auto & drvStore = worker.evalStore.isValidPath(drvPath) + ? worker.evalStore + : worker.store; + newRealisation.dependentRealisations = drvOutputReferences(worker.store, *drv, realisation.outPath, &drvStore); + } + worker.store.signRealisation(newRealisation); + worker.store.registerDrvOutput(newRealisation); + } + outputPaths.insert(realisation.outPath); + builtOutputs.emplace(outputName, realisation); + } + + runPostBuildHook( + worker.store, + *logger, + drvPath, + outputPaths + ); + } + + auto status = resolvedResult.status; + if (status == BuildResult::AlreadyValid) + status = BuildResult::ResolvesToAlreadyValid; + + co_return done(status, std::move(builtOutputs)); + } + + /* If we get this far, we know no dynamic drvs inputs */ + + for (auto & [depDrvPath, depNode] : fullDrv.inputDrvs.map) { + for (auto & outputName : depNode.value) { + /* Don't need to worry about `inputGoals`, because + impure derivations are always resolved above. Can + just use DB. This case only happens in the (older) + input addressed and fixed output derivation cases. 
*/ + auto outMap = [&]{ + for (auto * drvStore : { &worker.evalStore, &worker.store }) + if (drvStore->isValidPath(depDrvPath)) + return worker.store.queryDerivationOutputMap(depDrvPath, drvStore); + assert(false); + }(); + + auto outMapPath = outMap.find(outputName); + if (outMapPath == outMap.end()) { + throw Error( + "derivation '%s' requires non-existent output '%s' from input derivation '%s'", + worker.store.printStorePath(drvPath), outputName, worker.store.printStorePath(depDrvPath)); + } + + worker.store.computeFSClosure(outMapPath->second, inputPaths); + } + } + } + + /* Second, the input sources. */ + worker.store.computeFSClosure(drv->inputSrcs, inputPaths); + + debug("added input paths %s", worker.store.showPaths(inputPaths)); + + /* Okay, try to build. Note that here we don't wait for a build + slot to become available, since we don't need one if there is a + build hook. */ + co_await yield(); + co_return tryToBuild(); +} + +void DerivationBuildingGoal::started() +{ + auto msg = fmt( + buildMode == bmRepair ? "repairing outputs of '%s'" : + buildMode == bmCheck ? "checking outputs of '%s'" : + "building '%s'", worker.store.printStorePath(drvPath)); + fmt("building '%s'", worker.store.printStorePath(drvPath)); +#ifndef _WIN32 // TODO enable build hook on Windows + if (hook) msg += fmt(" on '%s'", machineName); +#endif + act = std::make_unique(*logger, lvlInfo, actBuild, msg, + Logger::Fields{worker.store.printStorePath(drvPath), +#ifndef _WIN32 // TODO enable build hook on Windows + hook ? machineName : +#endif + "", + 1, + 1}); + mcRunningBuilds = std::make_unique>(worker.runningBuilds); + worker.updateProgress(); +} + +Goal::Co DerivationBuildingGoal::tryToBuild() +{ + trace("trying to build"); + + /* Obtain locks on all output paths, if the paths are known a priori. + + The locks are automatically released when we exit this function or Nix + crashes. If we can't acquire the lock, then continue; hopefully some + other goal can start a build, and if not, the main loop will sleep a few + seconds and then retry this goal. */ + PathSet lockFiles; + /* FIXME: Should lock something like the drv itself so we don't build same + CA drv concurrently */ + if (dynamic_cast(&worker.store)) { + /* If we aren't a local store, we might need to use the local store as + a build remote, but that would cause a deadlock. */ + /* FIXME: Make it so we can use ourselves as a build remote even if we + are the local store (separate locking for building vs scheduling? */ + /* FIXME: find some way to lock for scheduling for the other stores so + a forking daemon with --store still won't farm out redundant builds. + */ + for (auto & i : drv->outputsAndOptPaths(worker.store)) { + if (i.second.second) + lockFiles.insert(worker.store.Store::toRealPath(*i.second.second)); + else + lockFiles.insert( + worker.store.Store::toRealPath(drvPath) + "." + i.first + ); + } + } + + if (!outputLocks.lockPaths(lockFiles, "", false)) + { + Activity act(*logger, lvlWarn, actBuildWaiting, + fmt("waiting for lock on %s", Magenta(showPaths(lockFiles)))); + + /* Wait then try locking again, repeat until success (returned + boolean is true). */ + do { + co_await waitForAWhile(); + } while (!outputLocks.lockPaths(lockFiles, "", false)); + } + + /* Now check again whether the outputs are valid. This is because + another process may have started building in parallel. After + it has finished and released the locks, we can (and should) + reuse its results. 
(Strictly speaking the first check can be + omitted, but that would be less efficient.) Note that since we + now hold the locks on the output paths, no other process can + build this derivation, so no further checks are necessary. */ + auto [allValid, validOutputs] = checkPathValidity(); + + if (buildMode != bmCheck && allValid) { + debug("skipping build of derivation '%s', someone beat us to it", worker.store.printStorePath(drvPath)); + outputLocks.setDeletion(true); + outputLocks.unlock(); + co_return done(BuildResult::AlreadyValid, std::move(validOutputs)); + } + + /* If any of the outputs already exist but are not valid, delete + them. */ + for (auto & [_, status] : initialOutputs) { + if (!status.known || status.known->isValid()) continue; + auto storePath = status.known->path; + debug("removing invalid path '%s'", worker.store.printStorePath(status.known->path)); + deletePath(worker.store.Store::toRealPath(storePath)); + } + + /* Don't do a remote build if the derivation has the attribute + `preferLocalBuild' set. Also, check and repair modes are only + supported for local builds. */ + bool buildLocally = + (buildMode != bmNormal || drvOptions->willBuildLocally(worker.store, *drv)) + && settings.maxBuildJobs.get() != 0; + + if (!buildLocally) { + switch (tryBuildHook()) { + case rpAccept: + /* Yes, it has started doing so. Wait until we get + EOF from the hook. */ + actLock.reset(); + buildResult.startTime = time(0); // inexact + started(); + co_await Suspend{}; + co_return hookDone(); + case rpPostpone: + /* Not now; wait until at least one child finishes or + the wake-up timeout expires. */ + if (!actLock) + actLock = std::make_unique(*logger, lvlWarn, actBuildWaiting, + fmt("waiting for a machine to build '%s'", Magenta(worker.store.printStorePath(drvPath)))); + outputLocks.unlock(); + co_await waitForAWhile(); + co_return tryToBuild(); + case rpDecline: + /* We should do it ourselves. */ + break; + } + } + + actLock.reset(); + + co_await yield(); + + if (!dynamic_cast(&worker.store)) { + throw Error( + R"( + Unable to build with a primary store that isn't a local store; + either pass a different '--store' or enable remote builds. + + For more information check 'man nix.conf' and search for '/machines'. + )" + ); + } + +#ifdef _WIN32 // TODO enable `DerivationBuilder` on Windows + throw UnimplementedError("building derivations is not yet implemented on Windows"); +#else + + // Will continue here while waiting for a build user below + while (true) { + + assert(!hook); + + unsigned int curBuilds = worker.getNrLocalBuilds(); + if (curBuilds >= settings.maxBuildJobs) { + outputLocks.unlock(); + co_await waitForBuildSlot(); + co_return tryToBuild(); + } + + if (!builder) { + /** + * Local implementation of these virtual methods, consider + * this just a record of lambdas. 
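Note: the "record of lambdas" comment above refers to a small callbacks interface whose implementation simply forwards to the enclosing goal. A generic sketch of that adapter pattern follows; the names are hypothetical and not the actual `DerivationBuilderCallbacks` API.

```cpp
// Hedged sketch of a callbacks interface implemented as a forwarding adapter.
#include <memory>

struct BuilderCallbacks {
    virtual ~BuilderCallbacks() = default;
    virtual void childStarted(int fd) = 0;
    virtual void childTerminated() = 0;
};

struct Goal {
    void onChildStarted(int fd) { /* register fd with the event loop */ }
    void onChildTerminated() { /* update worker bookkeeping */ }

    std::unique_ptr<BuilderCallbacks> makeCallbacks()
    {
        struct Adapter : BuilderCallbacks {
            Goal & goal;
            explicit Adapter(Goal & g) : goal(g) {}
            void childStarted(int fd) override { goal.onChildStarted(fd); }
            void childTerminated() override { goal.onChildTerminated(); }
        };
        return std::make_unique<Adapter>(*this);
    }
};
```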
+ */ + struct DerivationBuildingGoalCallbacks : DerivationBuilderCallbacks + { + DerivationBuildingGoal & goal; + + DerivationBuildingGoalCallbacks(DerivationBuildingGoal & goal, std::unique_ptr & builder) + : goal{goal} + {} + + ~DerivationBuildingGoalCallbacks() override = default; + + void childStarted(Descriptor builderOut) override + { + goal.worker.childStarted(goal.shared_from_this(), {builderOut}, true, true); + } + + void childTerminated() override + { + goal.worker.childTerminated(&goal); + } + + void noteHashMismatch() override + { + goal.worker.hashMismatch = true; + } + + void noteCheckMismatch() override + { + goal.worker.checkMismatch = true; + } + + void markContentsGood(const StorePath & path) override + { + goal.worker.markContentsGood(path); + } + + Path openLogFile() override { + return goal.openLogFile(); + } + void closeLogFile() override { + goal.closeLogFile(); + } + void appendLogTailErrorMsg(std::string & msg) override { + goal.appendLogTailErrorMsg(msg); + } + }; + + /* If we have to wait and retry (see below), then `builder` will + already be created, so we don't need to create it again. */ + builder = makeDerivationBuilder( + worker.store, + std::make_unique(*this, builder), + DerivationBuilderParams { + drvPath, + buildMode, + buildResult, + *drv, + parsedDrv.get(), + *drvOptions, + inputPaths, + initialOutputs, + act + }); + } + + if (!builder->prepareBuild()) { + if (!actLock) + actLock = std::make_unique(*logger, lvlWarn, actBuildWaiting, + fmt("waiting for a free build user ID for '%s'", Magenta(worker.store.printStorePath(drvPath)))); + co_await waitForAWhile(); + continue; + } + + break; + } + + actLock.reset(); + + try { + + /* Okay, we have to build. */ + builder->startBuilder(); + + } catch (BuildError & e) { + builder.reset(); + outputLocks.unlock(); + worker.permanentFailure = true; + co_return done(BuildResult::InputRejected, {}, std::move(e)); + } + + started(); + co_await Suspend{}; + + trace("build done"); + + auto res = builder->unprepareBuild(); + // N.B. cannot use `std::visit` with co-routine return + if (auto * ste = std::get_if<0>(&res)) { + outputLocks.unlock(); + co_return done(std::move(ste->first), {}, std::move(ste->second)); + } else if (auto * builtOutputs = std::get_if<1>(&res)) { + /* It is now safe to delete the lock files, since all future + lockers will see that the output paths are valid; they will + not create new lock files with the same names as the old + (unlinked) lock files. 
*/ + outputLocks.setDeletion(true); + outputLocks.unlock(); + co_return done(BuildResult::Built, std::move(*builtOutputs)); + } else { + unreachable(); + } +#endif +} + + +void runPostBuildHook( + Store & store, + Logger & logger, + const StorePath & drvPath, + const StorePathSet & outputPaths) +{ + auto hook = settings.postBuildHook; + if (hook == "") + return; + + Activity act(logger, lvlTalkative, actPostBuildHook, + fmt("running post-build-hook '%s'", settings.postBuildHook), + Logger::Fields{store.printStorePath(drvPath)}); + PushActivity pact(act.id); + StringMap hookEnvironment = getEnv(); + + hookEnvironment.emplace("DRV_PATH", store.printStorePath(drvPath)); + hookEnvironment.emplace("OUT_PATHS", chomp(concatStringsSep(" ", store.printStorePathSet(outputPaths)))); + hookEnvironment.emplace("NIX_CONFIG", globalConfig.toKeyValue()); + + struct LogSink : Sink { + Activity & act; + std::string currentLine; + + LogSink(Activity & act) : act(act) { } + + void operator() (std::string_view data) override { + for (auto c : data) { + if (c == '\n') { + flushLine(); + } else { + currentLine += c; + } + } + } + + void flushLine() { + act.result(resPostBuildLogLine, currentLine); + currentLine.clear(); + } + + ~LogSink() { + if (currentLine != "") { + currentLine += '\n'; + flushLine(); + } + } + }; + LogSink sink(act); + + runProgram2({ + .program = settings.postBuildHook, + .environment = hookEnvironment, + .standardOut = &sink, + .mergeStderrToStdout = true, + }); +} + + +void DerivationBuildingGoal::appendLogTailErrorMsg(std::string & msg) +{ + if (!logger->isVerbose() && !logTail.empty()) { + msg += fmt("\nLast %d log lines:\n", logTail.size()); + for (auto & line : logTail) { + msg += "> "; + msg += line; + msg += "\n"; + } + auto nixLogCommand = "nix log"; + // The command is on a separate line for easy copying, such as with triple click. + // This message will be indented elsewhere, so removing the indentation before the + // command will not put it at the start of the line unfortunately. + msg += fmt("For full logs, run:\n " ANSI_BOLD "%s %s" ANSI_NORMAL, + nixLogCommand, + worker.store.printStorePath(drvPath)); + } +} + + +Goal::Co DerivationBuildingGoal::hookDone() +{ +#ifndef _WIN32 + assert(hook); +#endif + + trace("hook build done"); + + /* Since we got an EOF on the logger pipe, the builder is presumed + to have terminated. In fact, the builder could also have + simply have closed its end of the pipe, so just to be sure, + kill it. */ + int status = +#ifndef _WIN32 // TODO enable build hook on Windows + hook->pid.kill(); +#else + 0; +#endif + + debug("build hook for '%s' finished", worker.store.printStorePath(drvPath)); + + buildResult.timesBuilt++; + buildResult.stopTime = time(0); + + /* So the child is gone now. */ + worker.childTerminated(this); + + /* Close the read side of the logger pipe. */ +#ifndef _WIN32 // TODO enable build hook on Windows + hook->builderOut.readSide.close(); + hook->fromHook.readSide.close(); +#endif + + /* Close the log file. */ + closeLogFile(); + + /* Check the exit status. */ + if (!statusOk(status)) { + auto msg = fmt( + "Cannot build '%s'.\n" + "Reason: " ANSI_RED "builder %s" ANSI_NORMAL ".", + Magenta(worker.store.printStorePath(drvPath)), + statusToString(status)); + + msg += showKnownOutputs(worker.store, *drv); + + appendLogTailErrorMsg(msg); + + outputLocks.unlock(); + + /* TODO (once again) support fine-grained error codes, see issue #12641. 
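Note: `runPostBuildHook` above feeds the hook's output through a sink that buffers partial lines and emits one result per complete line, flushing any trailing partial line on destruction. Below is a standalone sketch of that line-buffering sink, using a plain callback as a stand-in for the real `Sink`/`Activity` types.

```cpp
// Hedged sketch of the line-buffering sink pattern.
#include <functional>
#include <string>
#include <string_view>

struct LineSink {
    std::function<void(const std::string &)> onLine;
    std::string current;

    void operator()(std::string_view data)
    {
        for (char c : data) {
            if (c == '\n') flush();
            else current += c;
        }
    }

    void flush()
    {
        onLine(current); // forward one complete line
        current.clear();
    }

    ~LineSink()
    {
        if (!current.empty()) flush(); // don't lose a trailing partial line
    }
};

// Usage: LineSink sink{[](const std::string & line) { /* log the line */ }};
//        sink("partial"); sink(" line\nnext line\n");
```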
*/ + + co_return done(BuildResult::MiscFailure, {}, BuildError(msg)); + } + + /* Compute the FS closure of the outputs and register them as + being valid. */ + auto builtOutputs = + /* When using a build hook, the build hook can register the output + as valid (by doing `nix-store --import'). If so we don't have + to do anything here. + + We can only early return when the outputs are known a priori. For + floating content-addressing derivations this isn't the case. + */ + assertPathValidity(); + + StorePathSet outputPaths; + for (auto & [_, output] : builtOutputs) + outputPaths.insert(output.outPath); + runPostBuildHook( + worker.store, + *logger, + drvPath, + outputPaths + ); + + /* It is now safe to delete the lock files, since all future + lockers will see that the output paths are valid; they will + not create new lock files with the same names as the old + (unlinked) lock files. */ + outputLocks.setDeletion(true); + outputLocks.unlock(); + + co_return done(BuildResult::Built, std::move(builtOutputs)); +} + +HookReply DerivationBuildingGoal::tryBuildHook() +{ +#ifdef _WIN32 // TODO enable build hook on Windows + return rpDecline; +#else + /* This should use `worker.evalStore`, but per #13179 the build hook + doesn't work with eval store anyways. */ + if (settings.buildHook.get().empty() || !worker.tryBuildHook || !worker.store.isValidPath(drvPath)) return rpDecline; + + if (!worker.hook) + worker.hook = std::make_unique(); + + try { + + /* Send the request to the hook. */ + worker.hook->sink + << "try" + << (worker.getNrLocalBuilds() < settings.maxBuildJobs ? 1 : 0) + << drv->platform + << worker.store.printStorePath(drvPath) + << drvOptions->getRequiredSystemFeatures(*drv); + worker.hook->sink.flush(); + + /* Read the first line of input, which should be a word indicating + whether the hook wishes to perform the build. */ + std::string reply; + while (true) { + auto s = [&]() { + try { + return readLine(worker.hook->fromHook.readSide.get()); + } catch (Error & e) { + e.addTrace({}, "while reading the response from the build hook"); + throw; + } + }(); + if (handleJSONLogMessage(s, worker.act, worker.hook->activities, "the build hook", true)) + ; + else if (s.substr(0, 2) == "# ") { + reply = s.substr(2); + break; + } + else { + s += "\n"; + writeToStderr(s); + } + } + + debug("hook reply is '%1%'", reply); + + if (reply == "decline") + return rpDecline; + else if (reply == "decline-permanently") { + worker.tryBuildHook = false; + worker.hook = 0; + return rpDecline; + } + else if (reply == "postpone") + return rpPostpone; + else if (reply != "accept") + throw Error("bad hook reply '%s'", reply); + + } catch (SysError & e) { + if (e.errNo == EPIPE) { + printError( + "build hook died unexpectedly: %s", + chomp(drainFD(worker.hook->fromHook.readSide.get()))); + worker.hook = 0; + return rpDecline; + } else + throw; + } + + hook = std::move(worker.hook); + + try { + machineName = readLine(hook->fromHook.readSide.get()); + } catch (Error & e) { + e.addTrace({}, "while reading the machine name from the build hook"); + throw; + } + + CommonProto::WriteConn conn { hook->sink }; + + /* Tell the hook all the inputs that have to be copied to the + remote system. */ + CommonProto::write(worker.store, conn, inputPaths); + + /* Tell the hooks the missing outputs that have to be copied back + from the remote system. */ + { + StringSet missingOutputs; + for (auto & [outputName, status] : initialOutputs) { + // XXX: Does this include known CA outputs? 
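Note: `tryBuildHook` above reads hook output until a line prefixed with `# ` carries the verdict (`accept`, `decline`, `decline-permanently`, `postpone`); everything else is treated as log output. The sketch below mirrors that reply loop with stand-in I/O callbacks; JSON log handling and the extra bookkeeping for `decline-permanently` (which also disables the hook for the rest of the session) are elided.

```cpp
// Hedged sketch of the hook reply protocol loop.
#include <functional>
#include <stdexcept>
#include <string>

enum class HookReply { Accept, Decline, Postpone };

HookReply readReply(const std::function<std::string()> & readLine,
                    const std::function<void(const std::string &)> & log)
{
    std::string reply;
    while (true) {
        auto s = readLine();
        if (s.rfind("# ", 0) == 0) { reply = s.substr(2); break; } // verdict line
        log(s);                                                    // pass-through output
    }
    if (reply == "decline" || reply == "decline-permanently") return HookReply::Decline;
    if (reply == "postpone") return HookReply::Postpone;
    if (reply == "accept") return HookReply::Accept;
    throw std::runtime_error("bad hook reply '" + reply + "'");
}
```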
+ if (buildMode != bmCheck && status.known && status.known->isValid()) continue; + missingOutputs.insert(outputName); + } + CommonProto::write(worker.store, conn, missingOutputs); + } + + hook->sink = FdSink(); + hook->toHook.writeSide.close(); + + /* Create the log file and pipe. */ + [[maybe_unused]] Path logFile = openLogFile(); + + std::set fds; + fds.insert(hook->fromHook.readSide.get()); + fds.insert(hook->builderOut.readSide.get()); + worker.childStarted(shared_from_this(), fds, false, false); + + return rpAccept; +#endif +} + + +Path DerivationBuildingGoal::openLogFile() +{ + logSize = 0; + + if (!settings.keepLog) return ""; + + auto baseName = std::string(baseNameOf(worker.store.printStorePath(drvPath))); + + /* Create a log file. */ + Path logDir; + if (auto localStore = dynamic_cast(&worker.store)) + logDir = localStore->config->logDir; + else + logDir = settings.nixLogDir; + Path dir = fmt("%s/%s/%s/", logDir, LocalFSStore::drvsLogDir, baseName.substr(0, 2)); + createDirs(dir); + + Path logFileName = fmt("%s/%s%s", dir, baseName.substr(2), + settings.compressLog ? ".bz2" : ""); + + fdLogFile = toDescriptor(open(logFileName.c_str(), O_CREAT | O_WRONLY | O_TRUNC +#ifndef _WIN32 + | O_CLOEXEC +#endif + , 0666)); + if (!fdLogFile) throw SysError("creating log file '%1%'", logFileName); + + logFileSink = std::make_shared(fdLogFile.get()); + + if (settings.compressLog) + logSink = std::shared_ptr(makeCompressionSink("bzip2", *logFileSink)); + else + logSink = logFileSink; + + return logFileName; +} + + +void DerivationBuildingGoal::closeLogFile() +{ + auto logSink2 = std::dynamic_pointer_cast(logSink); + if (logSink2) logSink2->finish(); + if (logFileSink) logFileSink->flush(); + logSink = logFileSink = 0; + fdLogFile.close(); +} + + +bool DerivationBuildingGoal::isReadDesc(Descriptor fd) +{ +#ifdef _WIN32 // TODO enable build hook on Windows + return false; +#else + return + (hook && fd == hook->builderOut.readSide.get()) + || + (builder && fd == builder->builderOut.get()); +#endif +} + +void DerivationBuildingGoal::handleChildOutput(Descriptor fd, std::string_view data) +{ + // local & `ssh://`-builds are dealt with here. + auto isWrittenToLog = isReadDesc(fd); + if (isWrittenToLog) + { + logSize += data.size(); + if (settings.maxLogSize && logSize > settings.maxLogSize) { + killChild(); + // We're not inside a coroutine, hence we can't use co_return here. + // Thus we ignore the return value. + [[maybe_unused]] Done _ = done( + BuildResult::LogLimitExceeded, {}, + Error("%s killed after writing more than %d bytes of log output", + getName(), settings.maxLogSize)); + return; + } + + for (auto c : data) + if (c == '\r') + currentLogLinePos = 0; + else if (c == '\n') + flushLine(); + else { + if (currentLogLinePos >= currentLogLine.size()) + currentLogLine.resize(currentLogLinePos + 1); + currentLogLine[currentLogLinePos++] = c; + } + + if (logSink) (*logSink)(data); + } + +#ifndef _WIN32 // TODO enable build hook on Windows + if (hook && fd == hook->fromHook.readSide.get()) { + for (auto c : data) + if (c == '\n') { + auto json = parseJSONMessage(currentHookLine, "the derivation builder"); + if (json) { + auto s = handleJSONLogMessage(*json, worker.act, hook->activities, "the derivation builder", true); + // ensure that logs from a builder using `ssh-ng://` as protocol + // are also available to `nix log`. 
+ if (s && !isWrittenToLog && logSink) { + const auto type = (*json)["type"]; + const auto fields = (*json)["fields"]; + if (type == resBuildLogLine) { + (*logSink)((fields.size() > 0 ? fields[0].get() : "") + "\n"); + } else if (type == resSetPhase && ! fields.is_null()) { + const auto phase = fields[0]; + if (! phase.is_null()) { + // nixpkgs' stdenv produces lines in the log to signal + // phase changes. + // We want to get the same lines in case of remote builds. + // The format is: + // @nix { "action": "setPhase", "phase": "$curPhase" } + const auto logLine = nlohmann::json::object({ + {"action", "setPhase"}, + {"phase", phase} + }); + (*logSink)("@nix " + logLine.dump(-1, ' ', false, nlohmann::json::error_handler_t::replace) + "\n"); + } + } + } + } + currentHookLine.clear(); + } else + currentHookLine += c; + } +#endif +} + + +void DerivationBuildingGoal::handleEOF(Descriptor fd) +{ + if (!currentLogLine.empty()) flushLine(); + worker.wakeUp(shared_from_this()); +} + + +void DerivationBuildingGoal::flushLine() +{ + if (handleJSONLogMessage(currentLogLine, *act, builderActivities, "the derivation builder", false)) + ; + + else { + logTail.push_back(currentLogLine); + if (logTail.size() > settings.logLines) logTail.pop_front(); + + act->result(resBuildLogLine, currentLogLine); + } + + currentLogLine = ""; + currentLogLinePos = 0; +} + + +std::map> DerivationBuildingGoal::queryPartialDerivationOutputMap() +{ + assert(!drv->type().isImpure()); + + for (auto * drvStore : { &worker.evalStore, &worker.store }) + if (drvStore->isValidPath(drvPath)) + return worker.store.queryPartialDerivationOutputMap(drvPath, drvStore); + + /* In-memory derivation will naturally fall back on this case, where + we do best-effort with static information. */ + std::map> res; + for (auto & [name, output] : drv->outputs) + res.insert_or_assign(name, output.path(worker.store, drv->name, name)); + return res; +} + +std::pair DerivationBuildingGoal::checkPathValidity() +{ + if (drv->type().isImpure()) return { false, {} }; + + bool checkHash = buildMode == bmRepair; + SingleDrvOutputs validOutputs; + + for (auto & i : queryPartialDerivationOutputMap()) { + auto initialOutput = get(initialOutputs, i.first); + if (!initialOutput) + // this is an invalid output, gets caught with (!wantedOutputsLeft.empty()) + continue; + auto & info = *initialOutput; + info.wanted = true; + if (i.second) { + auto outputPath = *i.second; + info.known = { + .path = outputPath, + .status = !worker.store.isValidPath(outputPath) + ? PathStatus::Absent + : !checkHash || worker.pathContentsGood(outputPath) + ? PathStatus::Valid + : PathStatus::Corrupt, + }; + } + auto drvOutput = DrvOutput{info.outputHash, i.first}; + if (experimentalFeatureSettings.isEnabled(Xp::CaDerivations)) { + if (auto real = worker.store.queryRealisation(drvOutput)) { + info.known = { + .path = real->outPath, + .status = PathStatus::Valid, + }; + } else if (info.known && info.known->isValid()) { + // We know the output because it's a static output of the + // derivation, and the output path is valid, but we don't have + // its realisation stored (probably because it has been built + // without the `ca-derivations` experimental flag). 
+ worker.store.registerDrvOutput( + Realisation { + drvOutput, + info.known->path, + } + ); + } + } + if (info.known && info.known->isValid()) + validOutputs.emplace(i.first, Realisation { drvOutput, info.known->path }); + } + + bool allValid = true; + for (auto & [_, status] : initialOutputs) { + if (!status.wanted) continue; + if (!status.known || !status.known->isValid()) { + allValid = false; + break; + } + } + + return { allValid, validOutputs }; +} + + +SingleDrvOutputs DerivationBuildingGoal::assertPathValidity() +{ + auto [allValid, validOutputs] = checkPathValidity(); + if (!allValid) + throw Error("some outputs are unexpectedly invalid"); + return validOutputs; +} + + +Goal::Done DerivationBuildingGoal::done( + BuildResult::Status status, + SingleDrvOutputs builtOutputs, + std::optional ex) +{ + outputLocks.unlock(); + buildResult.status = status; + if (ex) + buildResult.errorMsg = fmt("%s", Uncolored(ex->info().msg)); + if (buildResult.status == BuildResult::TimedOut) + worker.timedOut = true; + if (buildResult.status == BuildResult::PermanentFailure) + worker.permanentFailure = true; + + mcRunningBuilds.reset(); + + if (buildResult.success()) { + buildResult.builtOutputs = std::move(builtOutputs); + if (status == BuildResult::Built) + worker.doneBuilds++; + } else { + if (status != BuildResult::DependencyFailed) + worker.failedBuilds++; + } + + worker.updateProgress(); + + auto traceBuiltOutputsFile = getEnv("_NIX_TRACE_BUILT_OUTPUTS").value_or(""); + if (traceBuiltOutputsFile != "") { + std::fstream fs; + fs.open(traceBuiltOutputsFile, std::fstream::out); + fs << worker.store.printStorePath(drvPath) << "\t" << buildResult.toString() << std::endl; + } + + logger->result( + act ? act->id : getCurActivity(), + resBuildResult, + nlohmann::json( + KeyedBuildResult( + buildResult, + DerivedPath::Built{.drvPath = makeConstantStorePathRef(drvPath), .outputs = OutputsSpec::All{}}))); + + return amDone(buildResult.success() ? 
ecSuccess : ecFailed, std::move(ex)); +} + +} diff --git a/src/libstore/build/derivation-goal.cc b/src/libstore/build/derivation-goal.cc index 850d21bca..9d0ec21ba 100644 --- a/src/libstore/build/derivation-goal.cc +++ b/src/libstore/build/derivation-goal.cc @@ -1,4 +1,5 @@ #include "nix/store/build/derivation-goal.hh" +#include "nix/store/build/derivation-building-goal.hh" #ifndef _WIN32 // TODO enable build hook on Windows # include "nix/store/build/hook-instance.hh" # include "nix/store/build/derivation-builder.hh" @@ -9,7 +10,7 @@ #include "nix/util/util.hh" #include "nix/util/compression.hh" #include "nix/store/common-protocol.hh" -#include "nix/store/common-protocol-impl.hh" +#include "nix/store/common-protocol-impl.hh" // Don't remove is actually needed #include "nix/store/local-store.hh" // TODO remove, along with remaining downcasts #include @@ -23,17 +24,16 @@ namespace nix { -DerivationGoal::DerivationGoal(const StorePath & drvPath, +DerivationGoal::DerivationGoal(ref drvReq, const OutputsSpec & wantedOutputs, Worker & worker, BuildMode buildMode) - : Goal(worker) - , useDerivation(true) - , drvPath(drvPath) + : Goal(worker, loadDerivation()) + , drvReq(drvReq) , wantedOutputs(wantedOutputs) , buildMode(buildMode) { name = fmt( "building of '%s' from .drv file", - DerivedPath::Built { makeConstantStorePathRef(drvPath), wantedOutputs }.to_string(worker.store)); + DerivedPath::Built { drvReq, wantedOutputs }.to_string(worker.store)); trace("created"); mcExpectedBuilds = std::make_unique>(worker.expectedBuilds); @@ -43,9 +43,8 @@ DerivationGoal::DerivationGoal(const StorePath & drvPath, DerivationGoal::DerivationGoal(const StorePath & drvPath, const BasicDerivation & drv, const OutputsSpec & wantedOutputs, Worker & worker, BuildMode buildMode) - : Goal(worker) - , useDerivation(false) - , drvPath(drvPath) + : Goal(worker, haveDerivation(drvPath)) + , drvReq(makeConstantStorePathRef(drvPath)) , wantedOutputs(wantedOutputs) , buildMode(buildMode) { @@ -53,30 +52,23 @@ DerivationGoal::DerivationGoal(const StorePath & drvPath, const BasicDerivation name = fmt( "building of '%s' from in-memory derivation", - DerivedPath::Built { makeConstantStorePathRef(drvPath), drv.outputNames() }.to_string(worker.store)); + DerivedPath::Built { drvReq, drv.outputNames() }.to_string(worker.store)); trace("created"); mcExpectedBuilds = std::make_unique>(worker.expectedBuilds); worker.updateProgress(); - /* Prevent the .chroot directory from being - garbage-collected. (See isActiveTempFile() in gc.cc.) */ - worker.store.addTempRoot(this->drvPath); } -DerivationGoal::~DerivationGoal() +static StorePath pathPartOfReq(const SingleDerivedPath & req) { - /* Careful: we should never ever throw an exception from a - destructor. */ - try { killChild(); } catch (...) { ignoreExceptionInDestructor(); } -#ifndef _WIN32 // TODO enable `DerivationBuilder` on Windows - if (builder) { - try { builder->stopDaemon(); } catch (...) { ignoreExceptionInDestructor(); } - try { builder->deleteTmpDir(false); } catch (...) { ignoreExceptionInDestructor(); } - } -#endif - try { closeLogFile(); } catch (...) { ignoreExceptionInDestructor(); } + return std::visit( + overloaded{ + [&](const SingleDerivedPath::Opaque & bo) { return bo.path; }, + [&](const SingleDerivedPath::Built & bfd) { return pathPartOfReq(*bfd.drvPath); }, + }, + req.raw()); } @@ -86,47 +78,15 @@ std::string DerivationGoal::key() i.e. a derivation named "aardvark" always comes before "baboon". 
And substitution goals always happen before derivation goals (due to "b$"). */ - return "b$" + std::string(drvPath.name()) + "$" + worker.store.printStorePath(drvPath); + return "b$" + std::string(pathPartOfReq(*drvReq).name()) + "$" + drvReq->to_string(worker.store); } -void DerivationGoal::killChild() -{ -#ifndef _WIN32 // TODO enable build hook on Windows - hook.reset(); -#endif -#ifndef _WIN32 // TODO enable `DerivationBuilder` on Windows - if (builder && builder->pid != -1) { - worker.childTerminated(this); - - /* If we're using a build user, then there is a tricky race - condition: if we kill the build user before the child has - done its setuid() to the build user uid, then it won't be - killed, and we'll potentially lock up in pid.wait(). So - also send a conventional kill to the child. */ - ::kill(-builder->pid, SIGKILL); /* ignore the result */ - - builder->killSandbox(true); - - builder->pid.wait(); - } -#endif -} - - -void DerivationGoal::timedOut(Error && ex) -{ - killChild(); - // We're not inside a coroutine, hence we can't use co_return here. - // Thus we ignore the return value. - [[maybe_unused]] Done _ = done(BuildResult::TimedOut, {}, std::move(ex)); -} - void DerivationGoal::addWantedOutputs(const OutputsSpec & outputs) { auto newWanted = wantedOutputs.union_(outputs); switch (needRestart) { - case NeedRestartForMoreOutputs::OutputsUnmodifedDontNeed: + case NeedRestartForMoreOutputs::OutputsUnmodifiedDontNeed: if (!newWanted.isSubsetOf(wantedOutputs)) needRestart = NeedRestartForMoreOutputs::OutputsAddedDoNeed; break; @@ -143,25 +103,47 @@ void DerivationGoal::addWantedOutputs(const OutputsSpec & outputs) } -Goal::Co DerivationGoal::init() { - trace("init"); +Goal::Co DerivationGoal::loadDerivation() { + trace("need to load derivation from file"); - if (useDerivation) { + { /* The first thing to do is to make sure that the derivation - exists. If it doesn't, it may be created through a - substitute. */ + exists. If it doesn't, it may be built from another + derivation, or merely substituted. We can make goal to get it + and not worry about which method it takes to get the + derivation. */ - if (buildMode != bmNormal || !worker.evalStore.isValidPath(drvPath)) { - Goals waitees{upcast_goal(worker.makePathSubstitutionGoal(drvPath))}; + if (auto optDrvPath = [this]() -> std::optional { + if (buildMode != bmNormal) + return std::nullopt; + + auto drvPath = StorePath::dummy; + try { + drvPath = resolveDerivedPath(worker.store, *drvReq); + } catch (MissingRealisation &) { + return std::nullopt; + } + auto cond = worker.evalStore.isValidPath(drvPath) || worker.store.isValidPath(drvPath); + return cond ? 
std::optional{drvPath} : std::nullopt; + }()) { + trace( + fmt("already have drv '%s' for '%s', can go straight to building", + worker.store.printStorePath(*optDrvPath), + drvReq->to_string(worker.store))); + } else { + trace("need to obtain drv we want to build"); + Goals waitees{worker.makeGoal(DerivedPath::fromSingle(*drvReq))}; co_await await(std::move(waitees)); } trace("loading derivation"); if (nrFailed != 0) { - co_return done(BuildResult::MiscFailure, {}, Error("cannot build missing derivation '%s'", worker.store.printStorePath(drvPath))); + co_return amDone(ecFailed, Error("cannot build missing derivation '%s'", drvReq->to_string(worker.store))); } + StorePath drvPath = resolveDerivedPath(worker.store, *drvReq); + /* `drvPath' should already be a root, but let's be on the safe side: if the user forgot to make it a root, we wouldn't want things being garbage collected while we're busy. */ @@ -180,30 +162,64 @@ Goal::Co DerivationGoal::init() { } } assert(drv); - } - co_return haveDerivation(); + co_return haveDerivation(drvPath); + } } -Goal::Co DerivationGoal::haveDerivation() +Goal::Co DerivationGoal::haveDerivation(StorePath drvPath) { trace("have derivation"); - if (auto parsedOpt = StructuredAttrs::tryParse(drv->env)) { - parsedDrv = std::make_unique(*parsedOpt); - } - try { - drvOptions = std::make_unique( - DerivationOptions::fromStructuredAttrs(drv->env, parsedDrv.get())); - } catch (Error & e) { - e.addTrace({}, "while parsing derivation '%s'", worker.store.printStorePath(drvPath)); - throw; - } + auto drvOptions = [&]() -> DerivationOptions { + auto parsedOpt = StructuredAttrs::tryParse(drv->env); + try { + return DerivationOptions::fromStructuredAttrs(drv->env, parsedOpt ? &*parsedOpt : nullptr); + } catch (Error & e) { + e.addTrace({}, "while parsing derivation '%s'", worker.store.printStorePath(drvPath)); + throw; + } + }(); if (!drv->type().hasKnownOutputPaths()) experimentalFeatureSettings.require(Xp::CaDerivations); + /* At least one of the output paths could not be + produced using a substitute. So we have to build instead. */ + auto gaveUpOnSubstitution = [&]() -> Goal::Co + { + auto g = worker.makeDerivationBuildingGoal(drvPath, *drv, buildMode); + + /* We will finish with it ourselves, as if we were the derivational goal. */ + g->preserveException = true; + + // TODO move into constructor + g->initialOutputs = initialOutputs; + + { + Goals waitees; + waitees.insert(g); + co_await await(std::move(waitees)); + } + + trace("outer build done"); + + buildResult = g->getBuildResult(DerivedPath::Built{ + .drvPath = makeConstantStorePathRef(drvPath), + .outputs = wantedOutputs, + }); + + if (buildMode == bmCheck) { + /* In checking mode, the builder will not register any outputs. + So we want to make sure the ones that we wanted to check are + properly there. */ + buildResult.builtOutputs = assertPathValidity(drvPath); + } + + co_return amDone(g->exitCode, g->ex); + }; + for (auto & i : drv->outputsAndOptPaths(worker.store)) if (i.second.second) worker.store.addTempRoot(*i.second.second); @@ -245,11 +261,11 @@ Goal::Co DerivationGoal::haveDerivation() { /* Check what outputs paths are not already valid. */ - auto [allValid, validOutputs] = checkPathValidity(); + auto [allValid, validOutputs] = checkPathValidity(drvPath); /* If they are all valid, then we're done. 
*/ if (allValid && buildMode == bmNormal) { - co_return done(BuildResult::AlreadyValid, std::move(validOutputs)); + co_return done(drvPath, BuildResult::AlreadyValid, std::move(validOutputs)); } } @@ -258,7 +274,7 @@ Goal::Co DerivationGoal::haveDerivation() /* We are first going to try to create the invalid output paths through substitutes. If that doesn't work, we'll build them. */ - if (settings.useSubstitutes && drvOptions->substitutesAllowed()) + if (settings.useSubstitutes && drvOptions.substitutesAllowed()) for (auto & [outputName, status] : initialOutputs) { if (!status.wanted) continue; if (!status.known) @@ -285,53 +301,26 @@ Goal::Co DerivationGoal::haveDerivation() assert(!drv->type().isImpure()); - if (nrFailed > 0 && nrFailed > nrNoSubstituters + nrIncompleteClosure && !settings.tryFallback) { - co_return done(BuildResult::TransientFailure, {}, + if (nrFailed > 0 && nrFailed > nrNoSubstituters && !settings.tryFallback) { + co_return done(drvPath, BuildResult::TransientFailure, {}, Error("some substitutes for the outputs of derivation '%s' failed (usually happens due to networking issues); try '--fallback' to build derivation from source ", worker.store.printStorePath(drvPath))); } - /* If the substitutes form an incomplete closure, then we should - build the dependencies of this derivation, but after that, we - can still use the substitutes for this derivation itself. - - If the nrIncompleteClosure != nrFailed, we have another issue as well. - In particular, it may be the case that the hole in the closure is - an output of the current derivation, which causes a loop if retried. - */ - { - bool substitutionFailed = - nrIncompleteClosure > 0 && - nrIncompleteClosure == nrFailed; - switch (retrySubstitution) { - case RetrySubstitution::NoNeed: - if (substitutionFailed) - retrySubstitution = RetrySubstitution::YesNeed; - break; - case RetrySubstitution::YesNeed: - // Should not be able to reach this state from here. - assert(false); - break; - case RetrySubstitution::AlreadyRetried: - debug("substitution failed again, but we already retried once. 
Not retrying again."); - break; - } - } - - nrFailed = nrNoSubstituters = nrIncompleteClosure = 0; + nrFailed = nrNoSubstituters = 0; if (needRestart == NeedRestartForMoreOutputs::OutputsAddedDoNeed) { - needRestart = NeedRestartForMoreOutputs::OutputsUnmodifedDontNeed; - co_return haveDerivation(); + needRestart = NeedRestartForMoreOutputs::OutputsUnmodifiedDontNeed; + co_return haveDerivation(std::move(drvPath)); } - auto [allValid, validOutputs] = checkPathValidity(); + auto [allValid, validOutputs] = checkPathValidity(drvPath); if (buildMode == bmNormal && allValid) { - co_return done(BuildResult::Substituted, std::move(validOutputs)); + co_return done(drvPath, BuildResult::Substituted, std::move(validOutputs)); } if (buildMode == bmRepair && allValid) { - co_return repairClosure(); + co_return repairClosure(std::move(drvPath)); } if (buildMode == bmCheck && !allValid) throw Error("some outputs of '%s' are not valid, so checking is not possible", @@ -354,579 +343,7 @@ struct value_comparison }; -std::string showKnownOutputs(Store & store, const Derivation & drv) -{ - std::string msg; - StorePathSet expectedOutputPaths; - for (auto & i : drv.outputsAndOptPaths(store)) - if (i.second.second) - expectedOutputPaths.insert(*i.second.second); - if (!expectedOutputPaths.empty()) { - msg += "\nOutput paths:"; - for (auto & p : expectedOutputPaths) - msg += fmt("\n %s", Magenta(store.printStorePath(p))); - } - return msg; -} - - -/* At least one of the output paths could not be - produced using a substitute. So we have to build instead. */ -Goal::Co DerivationGoal::gaveUpOnSubstitution() -{ - /* At this point we are building all outputs, so if more are wanted there - is no need to restart. */ - needRestart = NeedRestartForMoreOutputs::BuildInProgressWillNotNeed; - - Goals waitees; - - std::map, GoalPtr, value_comparison> inputGoals; - - if (useDerivation) { - std::function, const DerivedPathMap::ChildNode &)> addWaiteeDerivedPath; - - addWaiteeDerivedPath = [&](ref inputDrv, const DerivedPathMap::ChildNode & inputNode) { - if (!inputNode.value.empty()) { - auto g = worker.makeGoal( - DerivedPath::Built { - .drvPath = inputDrv, - .outputs = inputNode.value, - }, - buildMode == bmRepair ? bmRepair : bmNormal); - inputGoals.insert_or_assign(inputDrv, g); - waitees.insert(std::move(g)); - } - for (const auto & [outputName, childNode] : inputNode.childMap) - addWaiteeDerivedPath( - make_ref(SingleDerivedPath::Built { inputDrv, outputName }), - childNode); - }; - - for (const auto & [inputDrvPath, inputNode] : dynamic_cast(drv.get())->inputDrvs.map) { - /* Ensure that pure, non-fixed-output derivations don't - depend on impure derivations. */ - if (experimentalFeatureSettings.isEnabled(Xp::ImpureDerivations) && !drv->type().isImpure() && !drv->type().isFixed()) { - auto inputDrv = worker.evalStore.readDerivation(inputDrvPath); - if (inputDrv.type().isImpure()) - throw Error("pure derivation '%s' depends on impure derivation '%s'", - worker.store.printStorePath(drvPath), - worker.store.printStorePath(inputDrvPath)); - } - - addWaiteeDerivedPath(makeConstantStorePathRef(inputDrvPath), inputNode); - } - } - - /* Copy the input sources from the eval store to the build - store. - - Note that some inputs might not be in the eval store because they - are (resolved) derivation outputs in a resolved derivation. 
*/ - if (&worker.evalStore != &worker.store) { - RealisedPath::Set inputSrcs; - for (auto & i : drv->inputSrcs) - if (worker.evalStore.isValidPath(i)) - inputSrcs.insert(i); - copyClosure(worker.evalStore, worker.store, inputSrcs); - } - - for (auto & i : drv->inputSrcs) { - if (worker.store.isValidPath(i)) continue; - if (!settings.useSubstitutes) - throw Error("dependency '%s' of '%s' does not exist, and substitution is disabled", - worker.store.printStorePath(i), worker.store.printStorePath(drvPath)); - waitees.insert(upcast_goal(worker.makePathSubstitutionGoal(i))); - } - - co_await await(std::move(waitees)); - - - trace("all inputs realised"); - - if (nrFailed != 0) { - if (!useDerivation) - throw Error("some dependencies of '%s' are missing", worker.store.printStorePath(drvPath)); - auto msg = fmt( - "Cannot build '%s'.\n" - "Reason: " ANSI_RED "%d %s failed" ANSI_NORMAL ".", - Magenta(worker.store.printStorePath(drvPath)), - nrFailed, - nrFailed == 1 ? "dependency" : "dependencies"); - msg += showKnownOutputs(worker.store, *drv); - co_return done(BuildResult::DependencyFailed, {}, Error(msg)); - } - - if (retrySubstitution == RetrySubstitution::YesNeed) { - retrySubstitution = RetrySubstitution::AlreadyRetried; - co_return haveDerivation(); - } - - /* Gather information necessary for computing the closure and/or - running the build hook. */ - - /* Determine the full set of input paths. */ - - /* First, the input derivations. */ - if (useDerivation) { - auto & fullDrv = *dynamic_cast(drv.get()); - - auto drvType = fullDrv.type(); - bool resolveDrv = std::visit(overloaded { - [&](const DerivationType::InputAddressed & ia) { - /* must resolve if deferred. */ - return ia.deferred; - }, - [&](const DerivationType::ContentAddressed & ca) { - return !fullDrv.inputDrvs.map.empty() && ( - ca.fixed - /* Can optionally resolve if fixed, which is good - for avoiding unnecessary rebuilds. */ - ? experimentalFeatureSettings.isEnabled(Xp::CaDerivations) - /* Must resolve if floating and there are any inputs - drvs. */ - : true); - }, - [&](const DerivationType::Impure &) { - return true; - } - }, drvType.raw) - /* no inputs are outputs of dynamic derivations */ - || std::ranges::any_of( - fullDrv.inputDrvs.map.begin(), - fullDrv.inputDrvs.map.end(), - [](auto & pair) { return !pair.second.childMap.empty(); }); - - if (resolveDrv && !fullDrv.inputDrvs.map.empty()) { - experimentalFeatureSettings.require(Xp::CaDerivations); - - /* We are be able to resolve this derivation based on the - now-known results of dependencies. If so, we become a - stub goal aliasing that resolved derivation goal. */ - std::optional attempt = fullDrv.tryResolve(worker.store, - [&](ref drvPath, const std::string & outputName) -> std::optional { - auto mEntry = get(inputGoals, drvPath); - if (!mEntry) return std::nullopt; - - auto buildResult = (*mEntry)->getBuildResult(DerivedPath::Built{drvPath, OutputsSpec::Names{outputName}}); - if (!buildResult.success()) return std::nullopt; - - auto i = get(buildResult.builtOutputs, outputName); - if (!i) return std::nullopt; - - return i->outPath; - }); - if (!attempt) { - /* TODO (impure derivations-induced tech debt) (see below): - The above attempt should have found it, but because we manage - inputDrvOutputs statefully, sometimes it gets out of sync with - the real source of truth (store). So we query the store - directly if there's a problem. 
*/ - attempt = fullDrv.tryResolve(worker.store, &worker.evalStore); - } - assert(attempt); - Derivation drvResolved { std::move(*attempt) }; - - auto pathResolved = writeDerivation(worker.store, drvResolved); - - auto msg = fmt("resolved derivation: '%s' -> '%s'", - worker.store.printStorePath(drvPath), - worker.store.printStorePath(pathResolved)); - act = std::make_unique(*logger, lvlInfo, actBuildWaiting, msg, - Logger::Fields { - worker.store.printStorePath(drvPath), - worker.store.printStorePath(pathResolved), - }); - - auto resolvedDrvGoal = worker.makeDerivationGoal( - pathResolved, wantedOutputs, buildMode); - { - Goals waitees{resolvedDrvGoal}; - co_await await(std::move(waitees)); - } - - trace("resolved derivation finished"); - - auto resolvedDrv = *resolvedDrvGoal->drv; - auto & resolvedResult = resolvedDrvGoal->buildResult; - - SingleDrvOutputs builtOutputs; - - if (resolvedResult.success()) { - auto resolvedHashes = staticOutputHashes(worker.store, resolvedDrv); - - StorePathSet outputPaths; - - for (auto & outputName : resolvedDrv.outputNames()) { - auto initialOutput = get(initialOutputs, outputName); - auto resolvedHash = get(resolvedHashes, outputName); - if ((!initialOutput) || (!resolvedHash)) - throw Error( - "derivation '%s' doesn't have expected output '%s' (derivation-goal.cc/resolve)", - worker.store.printStorePath(drvPath), outputName); - - auto realisation = [&]{ - auto take1 = get(resolvedResult.builtOutputs, outputName); - if (take1) return *take1; - - /* The above `get` should work. But sateful tracking of - outputs in resolvedResult, this can get out of sync with the - store, which is our actual source of truth. For now we just - check the store directly if it fails. */ - auto take2 = worker.evalStore.queryRealisation(DrvOutput { *resolvedHash, outputName }); - if (take2) return *take2; - - throw Error( - "derivation '%s' doesn't have expected output '%s' (derivation-goal.cc/realisation)", - worker.store.printStorePath(resolvedDrvGoal->drvPath), outputName); - }(); - - if (!drv->type().isImpure()) { - auto newRealisation = realisation; - newRealisation.id = DrvOutput { initialOutput->outputHash, outputName }; - newRealisation.signatures.clear(); - if (!drv->type().isFixed()) { - auto & drvStore = worker.evalStore.isValidPath(drvPath) - ? worker.evalStore - : worker.store; - newRealisation.dependentRealisations = drvOutputReferences(worker.store, *drv, realisation.outPath, &drvStore); - } - worker.store.signRealisation(newRealisation); - worker.store.registerDrvOutput(newRealisation); - } - outputPaths.insert(realisation.outPath); - builtOutputs.emplace(outputName, realisation); - } - - runPostBuildHook( - worker.store, - *logger, - drvPath, - outputPaths - ); - } - - auto status = resolvedResult.status; - if (status == BuildResult::AlreadyValid) - status = BuildResult::ResolvesToAlreadyValid; - - co_return done(status, std::move(builtOutputs)); - } - - /* If we get this far, we know no dynamic drvs inputs */ - - for (auto & [depDrvPath, depNode] : fullDrv.inputDrvs.map) { - for (auto & outputName : depNode.value) { - /* Don't need to worry about `inputGoals`, because - impure derivations are always resolved above. Can - just use DB. This case only happens in the (older) - input addressed and fixed output derivation cases. 
*/ - auto outMap = [&]{ - for (auto * drvStore : { &worker.evalStore, &worker.store }) - if (drvStore->isValidPath(depDrvPath)) - return worker.store.queryDerivationOutputMap(depDrvPath, drvStore); - assert(false); - }(); - - auto outMapPath = outMap.find(outputName); - if (outMapPath == outMap.end()) { - throw Error( - "derivation '%s' requires non-existent output '%s' from input derivation '%s'", - worker.store.printStorePath(drvPath), outputName, worker.store.printStorePath(depDrvPath)); - } - - worker.store.computeFSClosure(outMapPath->second, inputPaths); - } - } - } - - /* Second, the input sources. */ - worker.store.computeFSClosure(drv->inputSrcs, inputPaths); - - debug("added input paths %s", worker.store.showPaths(inputPaths)); - - /* Okay, try to build. Note that here we don't wait for a build - slot to become available, since we don't need one if there is a - build hook. */ - co_await yield(); - co_return tryToBuild(); -} - -void DerivationGoal::started() -{ - auto msg = fmt( - buildMode == bmRepair ? "repairing outputs of '%s'" : - buildMode == bmCheck ? "checking outputs of '%s'" : - "building '%s'", worker.store.printStorePath(drvPath)); - fmt("building '%s'", worker.store.printStorePath(drvPath)); -#ifndef _WIN32 // TODO enable build hook on Windows - if (hook) msg += fmt(" on '%s'", machineName); -#endif - act = std::make_unique(*logger, lvlInfo, actBuild, msg, - Logger::Fields{worker.store.printStorePath(drvPath), -#ifndef _WIN32 // TODO enable build hook on Windows - hook ? machineName : -#endif - "", - 1, - 1}); - mcRunningBuilds = std::make_unique>(worker.runningBuilds); - worker.updateProgress(); -} - -Goal::Co DerivationGoal::tryToBuild() -{ - trace("trying to build"); - - /* Obtain locks on all output paths, if the paths are known a priori. - - The locks are automatically released when we exit this function or Nix - crashes. If we can't acquire the lock, then continue; hopefully some - other goal can start a build, and if not, the main loop will sleep a few - seconds and then retry this goal. */ - PathSet lockFiles; - /* FIXME: Should lock something like the drv itself so we don't build same - CA drv concurrently */ - if (dynamic_cast(&worker.store)) { - /* If we aren't a local store, we might need to use the local store as - a build remote, but that would cause a deadlock. */ - /* FIXME: Make it so we can use ourselves as a build remote even if we - are the local store (separate locking for building vs scheduling? */ - /* FIXME: find some way to lock for scheduling for the other stores so - a forking daemon with --store still won't farm out redundant builds. - */ - for (auto & i : drv->outputsAndOptPaths(worker.store)) { - if (i.second.second) - lockFiles.insert(worker.store.Store::toRealPath(*i.second.second)); - else - lockFiles.insert( - worker.store.Store::toRealPath(drvPath) + "." + i.first - ); - } - } - - if (!outputLocks.lockPaths(lockFiles, "", false)) - { - Activity act(*logger, lvlWarn, actBuildWaiting, - fmt("waiting for lock on %s", Magenta(showPaths(lockFiles)))); - - /* Wait then try locking again, repeat until success (returned - boolean is true). */ - do { - co_await waitForAWhile(); - } while (!outputLocks.lockPaths(lockFiles, "", false)); - } - - /* Now check again whether the outputs are valid. This is because - another process may have started building in parallel. After - it has finished and released the locks, we can (and should) - reuse its results. (Strictly speaking the first check can be - omitted, but that would be less efficient.) 
Note that since we - now hold the locks on the output paths, no other process can - build this derivation, so no further checks are necessary. */ - auto [allValid, validOutputs] = checkPathValidity(); - - if (buildMode != bmCheck && allValid) { - debug("skipping build of derivation '%s', someone beat us to it", worker.store.printStorePath(drvPath)); - outputLocks.setDeletion(true); - outputLocks.unlock(); - co_return done(BuildResult::AlreadyValid, std::move(validOutputs)); - } - - /* If any of the outputs already exist but are not valid, delete - them. */ - for (auto & [_, status] : initialOutputs) { - if (!status.known || status.known->isValid()) continue; - auto storePath = status.known->path; - debug("removing invalid path '%s'", worker.store.printStorePath(status.known->path)); - deletePath(worker.store.Store::toRealPath(storePath)); - } - - /* Don't do a remote build if the derivation has the attribute - `preferLocalBuild' set. Also, check and repair modes are only - supported for local builds. */ - bool buildLocally = - (buildMode != bmNormal || drvOptions->willBuildLocally(worker.store, *drv)) - && settings.maxBuildJobs.get() != 0; - - if (!buildLocally) { - switch (tryBuildHook()) { - case rpAccept: - /* Yes, it has started doing so. Wait until we get - EOF from the hook. */ - actLock.reset(); - buildResult.startTime = time(0); // inexact - started(); - co_await Suspend{}; - co_return hookDone(); - case rpPostpone: - /* Not now; wait until at least one child finishes or - the wake-up timeout expires. */ - if (!actLock) - actLock = std::make_unique(*logger, lvlWarn, actBuildWaiting, - fmt("waiting for a machine to build '%s'", Magenta(worker.store.printStorePath(drvPath)))); - outputLocks.unlock(); - co_await waitForAWhile(); - co_return tryToBuild(); - case rpDecline: - /* We should do it ourselves. */ - break; - } - } - - actLock.reset(); - - co_await yield(); - - if (!dynamic_cast(&worker.store)) { - throw Error( - R"( - Unable to build with a primary store that isn't a local store; - either pass a different '--store' or enable remote builds. - - For more information check 'man nix.conf' and search for '/machines'. - )" - ); - } - -#ifdef _WIN32 // TODO enable `DerivationBuilder` on Windows - throw UnimplementedError("building derivations is not yet implemented on Windows"); -#else - - // Will continue here while waiting for a build user below - while (true) { - - assert(!hook); - - unsigned int curBuilds = worker.getNrLocalBuilds(); - if (curBuilds >= settings.maxBuildJobs) { - outputLocks.unlock(); - co_await waitForBuildSlot(); - co_return tryToBuild(); - } - - if (!builder) { - /** - * Local implementation of these virtual methods, consider - * this just a record of lambdas. 
- */ - struct DerivationGoalCallbacks : DerivationBuilderCallbacks - { - DerivationGoal & goal; - - DerivationGoalCallbacks(DerivationGoal & goal, std::unique_ptr & builder) - : goal{goal} - {} - - ~DerivationGoalCallbacks() override = default; - - void childStarted(Descriptor builderOut) override - { - goal.worker.childStarted(goal.shared_from_this(), {builderOut}, true, true); - } - - void childTerminated() override - { - goal.worker.childTerminated(&goal); - } - - void noteHashMismatch() override - { - goal.worker.hashMismatch = true; - } - - void noteCheckMismatch() override - { - goal.worker.checkMismatch = true; - } - - void markContentsGood(const StorePath & path) override - { - goal.worker.markContentsGood(path); - } - - Path openLogFile() override { - return goal.openLogFile(); - } - void closeLogFile() override { - goal.closeLogFile(); - } - SingleDrvOutputs assertPathValidity() override { - return goal.assertPathValidity(); - } - void appendLogTailErrorMsg(std::string & msg) override { - goal.appendLogTailErrorMsg(msg); - } - }; - - /* If we have to wait and retry (see below), then `builder` will - already be created, so we don't need to create it again. */ - builder = makeDerivationBuilder( - worker.store, - std::make_unique(*this, builder), - DerivationBuilderParams { - drvPath, - buildMode, - buildResult, - *drv, - parsedDrv.get(), - *drvOptions, - inputPaths, - initialOutputs, - act - }); - } - - if (!builder->prepareBuild()) { - if (!actLock) - actLock = std::make_unique(*logger, lvlWarn, actBuildWaiting, - fmt("waiting for a free build user ID for '%s'", Magenta(worker.store.printStorePath(drvPath)))); - co_await waitForAWhile(); - continue; - } - - break; - } - - actLock.reset(); - - try { - - /* Okay, we have to build. */ - builder->startBuilder(); - - } catch (BuildError & e) { - outputLocks.unlock(); - builder->buildUser.reset(); - worker.permanentFailure = true; - co_return done(BuildResult::InputRejected, {}, std::move(e)); - } - - started(); - co_await Suspend{}; - - trace("build done"); - - auto res = builder->unprepareBuild(); - // N.B. cannot use `std::visit` with co-routine return - if (auto * ste = std::get_if<0>(&res)) { - outputLocks.unlock(); - co_return done(std::move(ste->first), {}, std::move(ste->second)); - } else if (auto * builtOutputs = std::get_if<1>(&res)) { - /* It is now safe to delete the lock files, since all future - lockers will see that the output paths are valid; they will - not create new lock files with the same names as the old - (unlinked) lock files. */ - outputLocks.setDeletion(true); - outputLocks.unlock(); - co_return done(BuildResult::Built, std::move(*builtOutputs)); - } else { - unreachable(); - } -#endif -} - - -Goal::Co DerivationGoal::repairClosure() +Goal::Co DerivationGoal::repairClosure(StorePath drvPath) { assert(!drv->type().isImpure()); @@ -936,7 +353,7 @@ Goal::Co DerivationGoal::repairClosure() that produced those outputs. */ /* Get the output closure. */ - auto outputs = queryDerivationOutputMap(); + auto outputs = queryDerivationOutputMap(drvPath); StorePathSet outputClosure; for (auto & i : outputs) { if (!wantedOutputs.contains(i.first)) continue; @@ -951,7 +368,12 @@ Goal::Co DerivationGoal::repairClosure() derivation is responsible for which path in the output closure. */ StorePathSet inputClosure; - if (useDerivation) worker.store.computeFSClosure(drvPath, inputClosure); + + /* If we're working from an in-memory derivation with no in-store + `*.drv` file, we cannot do this part. 
*/ + if (worker.store.isValidPath(drvPath)) + worker.store.computeFSClosure(drvPath, inputClosure); + std::map outputsToDrv; for (auto & i : inputClosure) if (i.isDerivation()) { @@ -989,478 +411,43 @@ Goal::Co DerivationGoal::repairClosure() throw Error("some paths in the output closure of derivation '%s' could not be repaired", worker.store.printStorePath(drvPath)); } - co_return done(BuildResult::AlreadyValid, assertPathValidity()); + co_return done(drvPath, BuildResult::AlreadyValid, assertPathValidity(drvPath)); } -void runPostBuildHook( - Store & store, - Logger & logger, - const StorePath & drvPath, - const StorePathSet & outputPaths) -{ - auto hook = settings.postBuildHook; - if (hook == "") - return; - - Activity act(logger, lvlTalkative, actPostBuildHook, - fmt("running post-build-hook '%s'", settings.postBuildHook), - Logger::Fields{store.printStorePath(drvPath)}); - PushActivity pact(act.id); - std::map hookEnvironment = getEnv(); - - hookEnvironment.emplace("DRV_PATH", store.printStorePath(drvPath)); - hookEnvironment.emplace("OUT_PATHS", chomp(concatStringsSep(" ", store.printStorePathSet(outputPaths)))); - hookEnvironment.emplace("NIX_CONFIG", globalConfig.toKeyValue()); - - struct LogSink : Sink { - Activity & act; - std::string currentLine; - - LogSink(Activity & act) : act(act) { } - - void operator() (std::string_view data) override { - for (auto c : data) { - if (c == '\n') { - flushLine(); - } else { - currentLine += c; - } - } - } - - void flushLine() { - act.result(resPostBuildLogLine, currentLine); - currentLine.clear(); - } - - ~LogSink() { - if (currentLine != "") { - currentLine += '\n'; - flushLine(); - } - } - }; - LogSink sink(act); - - runProgram2({ - .program = settings.postBuildHook, - .environment = hookEnvironment, - .standardOut = &sink, - .mergeStderrToStdout = true, - }); -} - - -void DerivationGoal::appendLogTailErrorMsg(std::string & msg) -{ - if (!logger->isVerbose() && !logTail.empty()) { - msg += fmt("\nLast %d log lines:\n", logTail.size()); - for (auto & line : logTail) { - msg += "> "; - msg += line; - msg += "\n"; - } - auto nixLogCommand = "nix log"; - // The command is on a separate line for easy copying, such as with triple click. - // This message will be indented elsewhere, so removing the indentation before the - // command will not put it at the start of the line unfortunately. - msg += fmt("For full logs, run:\n " ANSI_BOLD "%s %s" ANSI_NORMAL, - nixLogCommand, - worker.store.printStorePath(drvPath)); - } -} - - -Goal::Co DerivationGoal::hookDone() -{ -#ifndef _WIN32 - assert(hook); -#endif - - trace("hook build done"); - - /* Since we got an EOF on the logger pipe, the builder is presumed - to have terminated. In fact, the builder could also have - simply have closed its end of the pipe, so just to be sure, - kill it. */ - int status = -#ifndef _WIN32 // TODO enable build hook on Windows - hook->pid.kill(); -#else - 0; -#endif - - debug("build hook for '%s' finished", worker.store.printStorePath(drvPath)); - - buildResult.timesBuilt++; - buildResult.stopTime = time(0); - - /* So the child is gone now. */ - worker.childTerminated(this); - - /* Close the read side of the logger pipe. */ -#ifndef _WIN32 // TODO enable build hook on Windows - hook->builderOut.readSide.close(); - hook->fromHook.readSide.close(); -#endif - - /* Close the log file. */ - closeLogFile(); - - /* Check the exit status. 
*/ - if (!statusOk(status)) { - auto msg = fmt( - "Cannot build '%s'.\n" - "Reason: " ANSI_RED "builder %s" ANSI_NORMAL ".", - Magenta(worker.store.printStorePath(drvPath)), - statusToString(status)); - - msg += showKnownOutputs(worker.store, *drv); - - appendLogTailErrorMsg(msg); - - outputLocks.unlock(); - - /* TODO (once again) support fine-grained error codes, see issue #12641. */ - - co_return done(BuildResult::MiscFailure, {}, BuildError(msg)); - } - - /* Compute the FS closure of the outputs and register them as - being valid. */ - auto builtOutputs = - /* When using a build hook, the build hook can register the output - as valid (by doing `nix-store --import'). If so we don't have - to do anything here. - - We can only early return when the outputs are known a priori. For - floating content-addressing derivations this isn't the case. - */ - assertPathValidity(); - - StorePathSet outputPaths; - for (auto & [_, output] : builtOutputs) - outputPaths.insert(output.outPath); - runPostBuildHook( - worker.store, - *logger, - drvPath, - outputPaths - ); - - /* It is now safe to delete the lock files, since all future - lockers will see that the output paths are valid; they will - not create new lock files with the same names as the old - (unlinked) lock files. */ - outputLocks.setDeletion(true); - outputLocks.unlock(); - - co_return done(BuildResult::Built, std::move(builtOutputs)); -} - -HookReply DerivationGoal::tryBuildHook() -{ -#ifdef _WIN32 // TODO enable build hook on Windows - return rpDecline; -#else - if (settings.buildHook.get().empty() || !worker.tryBuildHook || !useDerivation) return rpDecline; - - if (!worker.hook) - worker.hook = std::make_unique(); - - try { - - /* Send the request to the hook. */ - worker.hook->sink - << "try" - << (worker.getNrLocalBuilds() < settings.maxBuildJobs ? 1 : 0) - << drv->platform - << worker.store.printStorePath(drvPath) - << drvOptions->getRequiredSystemFeatures(*drv); - worker.hook->sink.flush(); - - /* Read the first line of input, which should be a word indicating - whether the hook wishes to perform the build. */ - std::string reply; - while (true) { - auto s = [&]() { - try { - return readLine(worker.hook->fromHook.readSide.get()); - } catch (Error & e) { - e.addTrace({}, "while reading the response from the build hook"); - throw; - } - }(); - if (handleJSONLogMessage(s, worker.act, worker.hook->activities, "the build hook", true)) - ; - else if (s.substr(0, 2) == "# ") { - reply = s.substr(2); - break; - } - else { - s += "\n"; - writeToStderr(s); - } - } - - debug("hook reply is '%1%'", reply); - - if (reply == "decline") - return rpDecline; - else if (reply == "decline-permanently") { - worker.tryBuildHook = false; - worker.hook = 0; - return rpDecline; - } - else if (reply == "postpone") - return rpPostpone; - else if (reply != "accept") - throw Error("bad hook reply '%s'", reply); - - } catch (SysError & e) { - if (e.errNo == EPIPE) { - printError( - "build hook died unexpectedly: %s", - chomp(drainFD(worker.hook->fromHook.readSide.get()))); - worker.hook = 0; - return rpDecline; - } else - throw; - } - - hook = std::move(worker.hook); - - try { - machineName = readLine(hook->fromHook.readSide.get()); - } catch (Error & e) { - e.addTrace({}, "while reading the machine name from the build hook"); - throw; - } - - CommonProto::WriteConn conn { hook->sink }; - - /* Tell the hook all the inputs that have to be copied to the - remote system. 
*/ - CommonProto::write(worker.store, conn, inputPaths); - - /* Tell the hooks the missing outputs that have to be copied back - from the remote system. */ - { - StringSet missingOutputs; - for (auto & [outputName, status] : initialOutputs) { - // XXX: Does this include known CA outputs? - if (buildMode != bmCheck && status.known && status.known->isValid()) continue; - missingOutputs.insert(outputName); - } - CommonProto::write(worker.store, conn, missingOutputs); - } - - hook->sink = FdSink(); - hook->toHook.writeSide.close(); - - /* Create the log file and pipe. */ - [[maybe_unused]] Path logFile = openLogFile(); - - std::set fds; - fds.insert(hook->fromHook.readSide.get()); - fds.insert(hook->builderOut.readSide.get()); - worker.childStarted(shared_from_this(), fds, false, false); - - return rpAccept; -#endif -} - - -Path DerivationGoal::openLogFile() -{ - logSize = 0; - - if (!settings.keepLog) return ""; - - auto baseName = std::string(baseNameOf(worker.store.printStorePath(drvPath))); - - /* Create a log file. */ - Path logDir; - if (auto localStore = dynamic_cast(&worker.store)) - logDir = localStore->config->logDir; - else - logDir = settings.nixLogDir; - Path dir = fmt("%s/%s/%s/", logDir, LocalFSStore::drvsLogDir, baseName.substr(0, 2)); - createDirs(dir); - - Path logFileName = fmt("%s/%s%s", dir, baseName.substr(2), - settings.compressLog ? ".bz2" : ""); - - fdLogFile = toDescriptor(open(logFileName.c_str(), O_CREAT | O_WRONLY | O_TRUNC -#ifndef _WIN32 - | O_CLOEXEC -#endif - , 0666)); - if (!fdLogFile) throw SysError("creating log file '%1%'", logFileName); - - logFileSink = std::make_shared(fdLogFile.get()); - - if (settings.compressLog) - logSink = std::shared_ptr(makeCompressionSink("bzip2", *logFileSink)); - else - logSink = logFileSink; - - return logFileName; -} - - -void DerivationGoal::closeLogFile() -{ - auto logSink2 = std::dynamic_pointer_cast(logSink); - if (logSink2) logSink2->finish(); - if (logFileSink) logFileSink->flush(); - logSink = logFileSink = 0; - fdLogFile.close(); -} - - -bool DerivationGoal::isReadDesc(Descriptor fd) -{ -#ifdef _WIN32 // TODO enable build hook on Windows - return false; -#else - return - (hook && fd == hook->builderOut.readSide.get()) - || - (builder && fd == builder->builderOut.get()); -#endif -} - -void DerivationGoal::handleChildOutput(Descriptor fd, std::string_view data) -{ - // local & `ssh://`-builds are dealt with here. - auto isWrittenToLog = isReadDesc(fd); - if (isWrittenToLog) - { - logSize += data.size(); - if (settings.maxLogSize && logSize > settings.maxLogSize) { - killChild(); - // We're not inside a coroutine, hence we can't use co_return here. - // Thus we ignore the return value. 
- [[maybe_unused]] Done _ = done( - BuildResult::LogLimitExceeded, {}, - Error("%s killed after writing more than %d bytes of log output", - getName(), settings.maxLogSize)); - return; - } - - for (auto c : data) - if (c == '\r') - currentLogLinePos = 0; - else if (c == '\n') - flushLine(); - else { - if (currentLogLinePos >= currentLogLine.size()) - currentLogLine.resize(currentLogLinePos + 1); - currentLogLine[currentLogLinePos++] = c; - } - - if (logSink) (*logSink)(data); - } - -#ifndef _WIN32 // TODO enable build hook on Windows - if (hook && fd == hook->fromHook.readSide.get()) { - for (auto c : data) - if (c == '\n') { - auto json = parseJSONMessage(currentHookLine, "the derivation builder"); - if (json) { - auto s = handleJSONLogMessage(*json, worker.act, hook->activities, "the derivation builder", true); - // ensure that logs from a builder using `ssh-ng://` as protocol - // are also available to `nix log`. - if (s && !isWrittenToLog && logSink) { - const auto type = (*json)["type"]; - const auto fields = (*json)["fields"]; - if (type == resBuildLogLine) { - (*logSink)((fields.size() > 0 ? fields[0].get() : "") + "\n"); - } else if (type == resSetPhase && ! fields.is_null()) { - const auto phase = fields[0]; - if (! phase.is_null()) { - // nixpkgs' stdenv produces lines in the log to signal - // phase changes. - // We want to get the same lines in case of remote builds. - // The format is: - // @nix { "action": "setPhase", "phase": "$curPhase" } - const auto logLine = nlohmann::json::object({ - {"action", "setPhase"}, - {"phase", phase} - }); - (*logSink)("@nix " + logLine.dump(-1, ' ', false, nlohmann::json::error_handler_t::replace) + "\n"); - } - } - } - } - currentHookLine.clear(); - } else - currentHookLine += c; - } -#endif -} - - -void DerivationGoal::handleEOF(Descriptor fd) -{ - if (!currentLogLine.empty()) flushLine(); - worker.wakeUp(shared_from_this()); -} - - -void DerivationGoal::flushLine() -{ - if (handleJSONLogMessage(currentLogLine, *act, builderActivities, "the derivation builder", false)) - ; - - else { - logTail.push_back(currentLogLine); - if (logTail.size() > settings.logLines) logTail.pop_front(); - - act->result(resBuildLogLine, currentLogLine); - } - - currentLogLine = ""; - currentLogLinePos = 0; -} - - -std::map> DerivationGoal::queryPartialDerivationOutputMap() +std::map> DerivationGoal::queryPartialDerivationOutputMap(const StorePath & drvPath) { assert(!drv->type().isImpure()); - if (!useDerivation || drv->type().hasKnownOutputPaths()) { - std::map> res; - for (auto & [name, output] : drv->outputs) - res.insert_or_assign(name, output.path(worker.store, drv->name, name)); - return res; - } else { - for (auto * drvStore : { &worker.evalStore, &worker.store }) - if (drvStore->isValidPath(drvPath)) - return worker.store.queryPartialDerivationOutputMap(drvPath, drvStore); - assert(false); - } + + for (auto * drvStore : { &worker.evalStore, &worker.store }) + if (drvStore->isValidPath(drvPath)) + return worker.store.queryPartialDerivationOutputMap(drvPath, drvStore); + + /* In-memory derivation will naturally fall back on this case, where + we do best-effort with static information. 
*/ + std::map> res; + for (auto & [name, output] : drv->outputs) + res.insert_or_assign(name, output.path(worker.store, drv->name, name)); + return res; } -OutputPathMap DerivationGoal::queryDerivationOutputMap() +OutputPathMap DerivationGoal::queryDerivationOutputMap(const StorePath & drvPath) { assert(!drv->type().isImpure()); - if (!useDerivation || drv->type().hasKnownOutputPaths()) { - OutputPathMap res; - for (auto & [name, output] : drv->outputsAndOptPaths(worker.store)) - res.insert_or_assign(name, *output.second); - return res; - } else { - for (auto * drvStore : { &worker.evalStore, &worker.store }) - if (drvStore->isValidPath(drvPath)) - return worker.store.queryDerivationOutputMap(drvPath, drvStore); - assert(false); - } + + for (auto * drvStore : { &worker.evalStore, &worker.store }) + if (drvStore->isValidPath(drvPath)) + return worker.store.queryDerivationOutputMap(drvPath, drvStore); + + // See comment in `DerivationGoal::queryPartialDerivationOutputMap`. + OutputPathMap res; + for (auto & [name, output] : drv->outputsAndOptPaths(worker.store)) + res.insert_or_assign(name, *output.second); + return res; } -std::pair DerivationGoal::checkPathValidity() +std::pair DerivationGoal::checkPathValidity(const StorePath & drvPath) { if (drv->type().isImpure()) return { false, {} }; @@ -1475,10 +462,10 @@ std::pair DerivationGoal::checkPathValidity() }, wantedOutputs.raw); SingleDrvOutputs validOutputs; - for (auto & i : queryPartialDerivationOutputMap()) { + for (auto & i : queryPartialDerivationOutputMap(drvPath)) { auto initialOutput = get(initialOutputs, i.first); if (!initialOutput) - // this is an invalid output, gets catched with (!wantedOutputsLeft.empty()) + // this is an invalid output, gets caught with (!wantedOutputsLeft.empty()) continue; auto & info = *initialOutput; info.wanted = wantedOutputs.contains(i.first); @@ -1540,9 +527,9 @@ std::pair DerivationGoal::checkPathValidity() } -SingleDrvOutputs DerivationGoal::assertPathValidity() +SingleDrvOutputs DerivationGoal::assertPathValidity(const StorePath & drvPath) { - auto [allValid, validOutputs] = checkPathValidity(); + auto [allValid, validOutputs] = checkPathValidity(drvPath); if (!allValid) throw Error("some outputs are unexpectedly invalid"); return validOutputs; @@ -1550,11 +537,11 @@ SingleDrvOutputs DerivationGoal::assertPathValidity() Goal::Done DerivationGoal::done( + const StorePath & drvPath, BuildResult::Status status, SingleDrvOutputs builtOutputs, std::optional ex) { - outputLocks.unlock(); buildResult.status = status; if (ex) buildResult.errorMsg = fmt("%s", Uncolored(ex->info().msg)); @@ -1564,7 +551,6 @@ Goal::Done DerivationGoal::done( worker.permanentFailure = true; mcExpectedBuilds.reset(); - mcRunningBuilds.reset(); if (buildResult.success()) { auto wantedBuiltOutputs = filterDrvOutputs(wantedOutputs, std::move(builtOutputs)); @@ -1587,12 +573,12 @@ Goal::Done DerivationGoal::done( } logger->result( - act ? act->id : getCurActivity(), + getCurActivity(), resBuildResult, nlohmann::json( KeyedBuildResult( buildResult, - DerivedPath::Built{.drvPath = makeConstantStorePathRef(drvPath), .outputs = wantedOutputs}))); + DerivedPath::Built{.drvPath = makeConstantStorePathRef(drvPath), .outputs = OutputsSpec::All{}}))); return amDone(buildResult.success() ? 
ecSuccess : ecFailed, std::move(ex)); } diff --git a/src/libstore/build/drv-output-substitution-goal.cc b/src/libstore/build/drv-output-substitution-goal.cc index c553eeedb..e87a796f6 100644 --- a/src/libstore/build/drv-output-substitution-goal.cc +++ b/src/libstore/build/drv-output-substitution-goal.cc @@ -12,7 +12,7 @@ DrvOutputSubstitutionGoal::DrvOutputSubstitutionGoal( Worker & worker, RepairFlag repair, std::optional ca) - : Goal(worker) + : Goal(worker, init()) , id(id) { name = fmt("substitution of '%s'", id.to_string()); @@ -139,7 +139,7 @@ Goal::Co DrvOutputSubstitutionGoal::realisationFetched(Goals waitees, std::share if (nrFailed > 0) { debug("The output path of the derivation output '%s' could not be substituted", id.to_string()); - co_return amDone(nrNoSubstituters > 0 || nrIncompleteClosure > 0 ? ecIncompleteClosure : ecFailed); + co_return amDone(nrNoSubstituters > 0 ? ecNoSubstituters : ecFailed); } worker.store.registerDrvOutput(*outputInfo); diff --git a/src/libstore/build/entry-points.cc b/src/libstore/build/entry-points.cc index c934b0704..39fd471c4 100644 --- a/src/libstore/build/entry-points.cc +++ b/src/libstore/build/entry-points.cc @@ -30,7 +30,7 @@ void Store::buildPaths(const std::vector & reqs, BuildMode buildMod if (i->exitCode != Goal::ecSuccess) { #ifndef _WIN32 // TODO Enable building on Windows if (auto i2 = dynamic_cast(i.get())) - failed.insert(printStorePath(i2->drvPath)); + failed.insert(i2->drvReq->to_string(*this)); else #endif if (auto i2 = dynamic_cast(i.get())) diff --git a/src/libstore/build/goal.cc b/src/libstore/build/goal.cc index d2feb34c7..8a8d79283 100644 --- a/src/libstore/build/goal.cc +++ b/src/libstore/build/goal.cc @@ -151,11 +151,11 @@ Goal::Done Goal::amDone(ExitCode result, std::optional ex) trace("done"); assert(top_co); assert(exitCode == ecBusy); - assert(result == ecSuccess || result == ecFailed || result == ecNoSubstituters || result == ecIncompleteClosure); + assert(result == ecSuccess || result == ecFailed || result == ecNoSubstituters); exitCode = result; if (ex) { - if (!waiters.empty()) + if (!preserveException && !waiters.empty()) logError(ex->info()); else this->ex = std::move(*ex); @@ -170,12 +170,10 @@ Goal::Done Goal::amDone(ExitCode result, std::optional ex) goal->trace(fmt("waitee '%s' done; %d left", name, goal->waitees.size())); - if (result == ecFailed || result == ecNoSubstituters || result == ecIncompleteClosure) ++goal->nrFailed; + if (result == ecFailed || result == ecNoSubstituters) ++goal->nrFailed; if (result == ecNoSubstituters) ++goal->nrNoSubstituters; - if (result == ecIncompleteClosure) ++goal->nrIncompleteClosure; - if (goal->waitees.empty()) { worker.wakeUp(goal); } else if (result == ecFailed && !settings.keepGoing) { diff --git a/src/libstore/build/substitution-goal.cc b/src/libstore/build/substitution-goal.cc index c07f309e4..428fec25b 100644 --- a/src/libstore/build/substitution-goal.cc +++ b/src/libstore/build/substitution-goal.cc @@ -12,7 +12,7 @@ namespace nix { PathSubstitutionGoal::PathSubstitutionGoal(const StorePath & storePath, Worker & worker, RepairFlag repair, std::optional ca) - : Goal(worker) + : Goal(worker, init()) , storePath(storePath) , repair(repair) , ca(ca) @@ -181,7 +181,7 @@ Goal::Co PathSubstitutionGoal::tryToRun(StorePath subPath, nix::ref sub, if (nrFailed > 0) { co_return done( - nrNoSubstituters > 0 || nrIncompleteClosure > 0 ? ecIncompleteClosure : ecFailed, + nrNoSubstituters > 0 ? 
ecNoSubstituters : ecFailed, BuildResult::DependencyFailed, fmt("some references of path '%s' could not be realised", worker.store.printStorePath(storePath))); } diff --git a/src/libstore/build/worker.cc b/src/libstore/build/worker.cc index dd3692f41..6b8ac2e27 100644 --- a/src/libstore/build/worker.cc +++ b/src/libstore/build/worker.cc @@ -4,6 +4,7 @@ #include "nix/store/build/substitution-goal.hh" #include "nix/store/build/drv-output-substitution-goal.hh" #include "nix/store/build/derivation-goal.hh" +#include "nix/store/build/derivation-building-goal.hh" #ifndef _WIN32 // TODO Enable building on Windows # include "nix/store/build/hook-instance.hh" #endif @@ -41,13 +42,23 @@ Worker::~Worker() assert(expectedNarSize == 0); } +template +std::shared_ptr Worker::initGoalIfNeeded(std::weak_ptr & goal_weak, Args && ...args) +{ + if (auto goal = goal_weak.lock()) return goal; + + auto goal = std::make_shared(args...); + goal_weak = goal; + wakeUp(goal); + return goal; +} std::shared_ptr Worker::makeDerivationGoalCommon( - const StorePath & drvPath, + ref drvReq, const OutputsSpec & wantedOutputs, std::function()> mkDrvGoal) { - std::weak_ptr & goal_weak = derivationGoals[drvPath]; + std::weak_ptr & goal_weak = derivationGoals.ensureSlot(*drvReq).value; std::shared_ptr goal = goal_weak.lock(); if (!goal) { goal = mkDrvGoal(); @@ -60,29 +71,30 @@ std::shared_ptr Worker::makeDerivationGoalCommon( } -std::shared_ptr Worker::makeDerivationGoal(const StorePath & drvPath, +std::shared_ptr Worker::makeDerivationGoal(ref drvReq, const OutputsSpec & wantedOutputs, BuildMode buildMode) { - return makeDerivationGoalCommon(drvPath, wantedOutputs, [&]() -> std::shared_ptr { - return std::make_shared(drvPath, wantedOutputs, *this, buildMode); + return makeDerivationGoalCommon(drvReq, wantedOutputs, [&]() -> std::shared_ptr { + return std::make_shared(drvReq, wantedOutputs, *this, buildMode); }); } std::shared_ptr Worker::makeBasicDerivationGoal(const StorePath & drvPath, const BasicDerivation & drv, const OutputsSpec & wantedOutputs, BuildMode buildMode) { - return makeDerivationGoalCommon(drvPath, wantedOutputs, [&]() -> std::shared_ptr { + return makeDerivationGoalCommon(makeConstantStorePathRef(drvPath), wantedOutputs, [&]() -> std::shared_ptr { return std::make_shared(drvPath, drv, wantedOutputs, *this, buildMode); }); } -std::shared_ptr Worker::makePathSubstitutionGoal(const StorePath & path, RepairFlag repair, std::optional ca) +std::shared_ptr Worker::makeDerivationBuildingGoal(const StorePath & drvPath, + const Derivation & drv, BuildMode buildMode) { - std::weak_ptr & goal_weak = substitutionGoals[path]; + std::weak_ptr & goal_weak = derivationBuildingGoals[drvPath]; auto goal = goal_weak.lock(); // FIXME if (!goal) { - goal = std::make_shared(path, *this, repair, ca); + goal = std::make_shared(drvPath, drv, *this, buildMode); goal_weak = goal; wakeUp(goal); } @@ -90,16 +102,15 @@ std::shared_ptr Worker::makePathSubstitutionGoal(const Sto } +std::shared_ptr Worker::makePathSubstitutionGoal(const StorePath & path, RepairFlag repair, std::optional ca) +{ + return initGoalIfNeeded(substitutionGoals[path], path, *this, repair, ca); +} + + std::shared_ptr Worker::makeDrvOutputSubstitutionGoal(const DrvOutput& id, RepairFlag repair, std::optional ca) { - std::weak_ptr & goal_weak = drvOutputSubstitutionGoals[id]; - auto goal = goal_weak.lock(); // FIXME - if (!goal) { - goal = std::make_shared(id, *this, repair, ca); - goal_weak = goal; - wakeUp(goal); - } - return goal; + return 
initGoalIfNeeded(drvOutputSubstitutionGoals[id], id, *this, repair, ca); } @@ -107,10 +118,7 @@ GoalPtr Worker::makeGoal(const DerivedPath & req, BuildMode buildMode) { return std::visit(overloaded { [&](const DerivedPath::Built & bfd) -> GoalPtr { - if (auto bop = std::get_if(&*bfd.drvPath)) - return makeDerivationGoal(bop->path, bfd.outputs, buildMode); - else - throw UnimplementedError("Building dynamic derivations in one shot is not yet implemented."); + return makeDerivationGoal(bfd.drvPath, bfd.outputs, buildMode); }, [&](const DerivedPath::Opaque & bo) -> GoalPtr { return makePathSubstitutionGoal(bo.path, buildMode == bmRepair ? Repair : NoRepair); @@ -119,27 +127,48 @@ GoalPtr Worker::makeGoal(const DerivedPath & req, BuildMode buildMode) } +template +static void cullMap(std::map & goalMap, F f) +{ + for (auto i = goalMap.begin(); i != goalMap.end();) + if (!f(i->second)) + i = goalMap.erase(i); + else ++i; +} + + template static void removeGoal(std::shared_ptr goal, std::map> & goalMap) { /* !!! inefficient */ - for (auto i = goalMap.begin(); - i != goalMap.end(); ) - if (i->second.lock() == goal) { - auto j = i; ++j; - goalMap.erase(i); - i = j; - } - else ++i; + cullMap(goalMap, [&](const std::weak_ptr & gp) -> bool { + return gp.lock() != goal; + }); +} + +template +static void removeGoal(std::shared_ptr goal, std::map>::ChildNode> & goalMap); + +template +static void removeGoal(std::shared_ptr goal, std::map>::ChildNode> & goalMap) +{ + /* !!! inefficient */ + cullMap(goalMap, [&](DerivedPathMap>::ChildNode & node) -> bool { + if (node.value.lock() == goal) + node.value.reset(); + removeGoal(goal, node.childMap); + return !node.value.expired() || !node.childMap.empty(); + }); } void Worker::removeGoal(GoalPtr goal) { if (auto drvGoal = std::dynamic_pointer_cast(goal)) - nix::removeGoal(drvGoal, derivationGoals); - else - if (auto subGoal = std::dynamic_pointer_cast(goal)) + nix::removeGoal(drvGoal, derivationGoals.map); + else if (auto drvBuildingGoal = std::dynamic_pointer_cast(goal)) + nix::removeGoal(drvBuildingGoal, derivationBuildingGoals); + else if (auto subGoal = std::dynamic_pointer_cast(goal)) nix::removeGoal(subGoal, substitutionGoals); else if (auto subGoal = std::dynamic_pointer_cast(goal)) nix::removeGoal(subGoal, drvOutputSubstitutionGoals); @@ -202,6 +231,9 @@ void Worker::childStarted(GoalPtr goal, const std::set 0); nrLocalBuilds--; break; + case JobCategory::Administration: + /* Intentionally not limited, see docs */ + break; default: unreachable(); } @@ -279,7 +314,7 @@ void Worker::run(const Goals & _topGoals) topGoals.insert(i); if (auto goal = dynamic_cast(i.get())) { topPaths.push_back(DerivedPath::Built { - .drvPath = makeConstantStorePathRef(goal->drvPath), + .drvPath = goal->drvReq, .outputs = goal->wantedOutputs, }); } else @@ -289,9 +324,7 @@ void Worker::run(const Goals & _topGoals) } /* Call queryMissing() to efficiently query substitutes. */ - StorePathSet willBuild, willSubstitute, unknown; - uint64_t downloadSize, narSize; - store.queryMissing(topPaths, willBuild, willSubstitute, unknown, downloadSize, narSize); + store.queryMissing(topPaths); debug("entered goal loop"); @@ -327,23 +360,14 @@ void Worker::run(const Goals & _topGoals) else if (awake.empty() && 0U == settings.maxBuildJobs) { if (getMachines().empty()) throw Error( - R"( - Unable to start any build; - either increase '--max-jobs' or enable remote builds. - - For more information run 'man nix.conf' and search for '/machines'. 
- )" - ); + "Unable to start any build; either increase '--max-jobs' or enable remote builds.\n" + "\n" + "For more information run 'man nix.conf' and search for '/machines'."); else throw Error( - R"( - Unable to start any build; - remote machines may not have all required system features. - - For more information run 'man nix.conf' and search for '/machines'. - )" - ); - + "Unable to start any build; remote machines may not have all required system features.\n" + "\n" + "For more information run 'man nix.conf' and search for '/machines'."); } else assert(!awake.empty()); } diff --git a/src/libstore/daemon.cc b/src/libstore/daemon.cc index 4bca75228..b946ccbb5 100644 --- a/src/libstore/daemon.cc +++ b/src/libstore/daemon.cc @@ -949,14 +949,12 @@ static void performOp(TunnelLogger * logger, ref store, case WorkerProto::Op::QueryMissing: { auto targets = WorkerProto::Serialise::read(*store, rconn); logger->startWork(); - StorePathSet willBuild, willSubstitute, unknown; - uint64_t downloadSize, narSize; - store->queryMissing(targets, willBuild, willSubstitute, unknown, downloadSize, narSize); + auto missing = store->queryMissing(targets); logger->stopWork(); - WorkerProto::write(*store, wconn, willBuild); - WorkerProto::write(*store, wconn, willSubstitute); - WorkerProto::write(*store, wconn, unknown); - conn.to << downloadSize << narSize; + WorkerProto::write(*store, wconn, missing.willBuild); + WorkerProto::write(*store, wconn, missing.willSubstitute); + WorkerProto::write(*store, wconn, missing.unknown); + conn.to << missing.downloadSize << missing.narSize; break; } diff --git a/src/libstore/derivations.cc b/src/libstore/derivations.cc index 42de5ee0c..0657a7499 100644 --- a/src/libstore/derivations.cc +++ b/src/libstore/derivations.cc @@ -412,7 +412,7 @@ Derivation parseDerivation( expect(str, "rvWithVersion("); auto versionS = parseString(str); if (*versionS == "xp-dyn-drv") { - // Only verison we have so far + // Only version we have so far version = DerivationATermVersion::DynamicDerivations; xpSettings.require(Xp::DynamicDerivations); } else { @@ -553,7 +553,7 @@ static void unparseDerivedPathMapNode(const StoreDirConfig & store, std::string * derivation? * * In other words, does it on the output of derivation that is itself an - * ouput of a derivation? This corresponds to a dependency that is an + * output of a derivation? This corresponds to a dependency that is an * inductive derived path with more than one layer of * `DerivedPath::Built`. 
*/ @@ -1333,6 +1333,11 @@ nlohmann::json Derivation::toJSON(const StoreDirConfig & store) const res["args"] = args; res["env"] = env; + if (auto it = env.find("__json"); it != env.end()) { + res["env"].erase("__json"); + res["structuredAttrs"] = nlohmann::json::parse(it->second); + } + return res; } @@ -1396,7 +1401,17 @@ Derivation Derivation::fromJSON( res.platform = getString(valueAt(json, "system")); res.builder = getString(valueAt(json, "builder")); res.args = getStringList(valueAt(json, "args")); - res.env = getStringMap(valueAt(json, "env")); + + auto envJson = valueAt(json, "env"); + try { + res.env = getStringMap(envJson); + } catch (Error & e) { + e.addTrace({}, "while reading key 'env'"); + throw; + } + + if (auto structuredAttrs = get(json, "structuredAttrs")) + res.env.insert_or_assign("__json", structuredAttrs->dump()); return res; } diff --git a/src/libstore/derived-path-map.cc b/src/libstore/derived-path-map.cc index b785dddd9..408d1a6b9 100644 --- a/src/libstore/derived-path-map.cc +++ b/src/libstore/derived-path-map.cc @@ -52,6 +52,7 @@ typename DerivedPathMap::ChildNode * DerivedPathMap::findSlot(const Single // instantiations +#include "nix/store/build/derivation-goal.hh" namespace nix { template<> @@ -68,4 +69,7 @@ std::strong_ordering DerivedPathMap::ChildNode::operator <=> ( template struct DerivedPathMap::ChildNode; template struct DerivedPathMap; +template struct DerivedPathMap>; + + }; diff --git a/src/libstore/filetransfer.cc b/src/libstore/filetransfer.cc index 164cb37a7..50e0fcf2a 100644 --- a/src/libstore/filetransfer.cc +++ b/src/libstore/filetransfer.cc @@ -14,7 +14,7 @@ #endif #ifdef __linux__ -# include "nix/util/namespaces.hh" +# include "nix/util/linux-namespaces.hh" #endif #include diff --git a/src/libstore/gc.cc b/src/libstore/gc.cc index 75773d6c1..d1bbe1571 100644 --- a/src/libstore/gc.cc +++ b/src/libstore/gc.cc @@ -790,7 +790,7 @@ void LocalStore::collectGarbage(const GCOptions & options, GCResults & results) deleteFromStore(path.to_string()); referrersCache.erase(path); } catch (PathInUse &e) { - // If we end up here, it's likely a new occurence + // If we end up here, it's likely a new occurrence // of https://github.com/NixOS/nix/issues/11923 printError("BUG: %s", e.what()); } diff --git a/src/libstore/globals.cc b/src/libstore/globals.cc index e4c1f8819..df2f80ce0 100644 --- a/src/libstore/globals.cc +++ b/src/libstore/globals.cc @@ -85,7 +85,7 @@ Settings::Settings() builders = concatStringsSep("\n", ss); } -#if defined(__linux__) && defined(SANDBOX_SHELL) +#if (defined(__linux__) || defined(__FreeBSD__)) && defined(SANDBOX_SHELL) sandboxPaths = tokenizeString("/bin/sh=" SANDBOX_SHELL); #endif diff --git a/src/libstore/include/nix/store/build-result.hh b/src/libstore/include/nix/store/build-result.hh index 40b3cdcf1..23ced29cb 100644 --- a/src/libstore/include/nix/store/build-result.hh +++ b/src/libstore/include/nix/store/build-result.hh @@ -16,7 +16,7 @@ struct BuildResult { /** * @note This is directly used in the nix-store --serve protocol. - * That means we need to worry about compatability across versions. + * That means we need to worry about compatibility across versions. * Therefore, don't remove status codes, and only add new status * codes at the end of the list. 
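The `Derivation::toJSON`/`fromJSON` hunk earlier in this file lifts the `__json` environment entry out into a top-level `structuredAttrs` field and re-encodes it on the way back in. A minimal sketch of that round trip, assuming only `nlohmann::json` and using a plain string map in place of the real derivation type:

```cpp
// Sketch of the __json <-> structuredAttrs round trip, with a plain string
// map standing in for the derivation environment.
#include <nlohmann/json.hpp>
#include <iostream>
#include <map>
#include <string>

using StringMap = std::map<std::string, std::string>;

nlohmann::json envToJson(const StringMap & env)
{
    nlohmann::json res;
    res["env"] = env;
    // Present structured attrs as real JSON rather than an escaped string.
    if (auto it = env.find("__json"); it != env.end()) {
        res["env"].erase("__json");
        res["structuredAttrs"] = nlohmann::json::parse(it->second);
    }
    return res;
}

StringMap envFromJson(const nlohmann::json & json)
{
    auto env = json.at("env").get<StringMap>();
    // Reconstruct the legacy encoding: structured attrs live in __json.
    if (auto it = json.find("structuredAttrs"); it != json.end())
        env.insert_or_assign("__json", it->dump());
    return env;
}

int main()
{
    StringMap env{{"name", "hello"}, {"__json", R"({"outputs":["out"]})"}};
    auto j = envToJson(env);
    std::cout << j.dump(2) << "\n";
    std::cout << envFromJson(j).at("__json") << "\n";
}
```

Exposing the payload as real JSON spares consumers of this output from parsing a doubly encoded string.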
*/ diff --git a/src/libstore/include/nix/store/build/derivation-building-goal.hh b/src/libstore/include/nix/store/build/derivation-building-goal.hh new file mode 100644 index 000000000..bff2e7a89 --- /dev/null +++ b/src/libstore/include/nix/store/build/derivation-building-goal.hh @@ -0,0 +1,194 @@ +#pragma once +///@file + +#include "nix/store/parsed-derivations.hh" +#include "nix/store/derivations.hh" +#include "nix/store/derivation-options.hh" +#include "nix/store/build/derivation-building-misc.hh" +#include "nix/store/outputs-spec.hh" +#include "nix/store/store-api.hh" +#include "nix/store/pathlocks.hh" +#include "nix/store/build/goal.hh" + +namespace nix { + +using std::map; + +#ifndef _WIN32 // TODO enable build hook on Windows +struct HookInstance; +struct DerivationBuilder; +#endif + +typedef enum {rpAccept, rpDecline, rpPostpone} HookReply; + +/** Used internally */ +void runPostBuildHook( + Store & store, + Logger & logger, + const StorePath & drvPath, + const StorePathSet & outputPaths); + +/** + * A goal for building some or all of the outputs of a derivation. + */ +struct DerivationBuildingGoal : public Goal +{ + /** The path of the derivation. */ + StorePath drvPath; + + /** + * The derivation stored at drvPath. + */ + std::unique_ptr drv; + + std::unique_ptr parsedDrv; + std::unique_ptr drvOptions; + + /** + * The remainder is state held during the build. + */ + + /** + * Locks on (fixed) output paths. + */ + PathLocks outputLocks; + + /** + * All input paths (that is, the union of FS closures of the + * immediate input paths). + */ + StorePathSet inputPaths; + + std::map initialOutputs; + + /** + * File descriptor for the log file. + */ + AutoCloseFD fdLogFile; + std::shared_ptr logFileSink, logSink; + + /** + * Number of bytes received from the builder's stdout/stderr. + */ + unsigned long logSize; + + /** + * The most recent log lines. + */ + std::list logTail; + + std::string currentLogLine; + size_t currentLogLinePos = 0; // to handle carriage return + + std::string currentHookLine; + +#ifndef _WIN32 // TODO enable build hook on Windows + /** + * The build hook. + */ + std::unique_ptr hook; + + std::unique_ptr builder; +#endif + + BuildMode buildMode; + + std::unique_ptr> mcRunningBuilds; + + std::unique_ptr act; + + /** + * Activity that denotes waiting for a lock. + */ + std::unique_ptr actLock; + + std::map builderActivities; + + /** + * The remote machine on which we're building. + */ + std::string machineName; + + DerivationBuildingGoal(const StorePath & drvPath, const Derivation & drv, + Worker & worker, + BuildMode buildMode = bmNormal); + ~DerivationBuildingGoal(); + + void timedOut(Error && ex) override; + + std::string key() override; + + /** + * The states. + */ + Co gaveUpOnSubstitution(); + Co tryToBuild(); + Co hookDone(); + + /** + * Is the build hook willing to perform the build? + */ + HookReply tryBuildHook(); + + /** + * Open a log file and a pipe to it. + */ + Path openLogFile(); + + /** + * Close the log file. + */ + void closeLogFile(); + + bool isReadDesc(Descriptor fd); + + /** + * Callback used by the worker to write to the log. + */ + void handleChildOutput(Descriptor fd, std::string_view data) override; + void handleEOF(Descriptor fd) override; + void flushLine(); + + /** + * Wrappers around the corresponding Store methods that first consult the + * derivation. This is currently needed because when there is no drv file + * there also is no DB entry. 
+ */ + std::map> queryPartialDerivationOutputMap(); + + /** + * Update 'initialOutputs' to determine the current status of the + * outputs of the derivation. Also returns a Boolean denoting + * whether all outputs are valid and non-corrupt, and a + * 'SingleDrvOutputs' structure containing the valid outputs. + */ + std::pair checkPathValidity(); + + /** + * Aborts if any output is not valid or corrupt, and otherwise + * returns a 'SingleDrvOutputs' structure containing all outputs. + */ + SingleDrvOutputs assertPathValidity(); + + /** + * Forcibly kill the child process, if any. + */ + void killChild(); + + void started(); + + Done done( + BuildResult::Status status, + SingleDrvOutputs builtOutputs = {}, + std::optional ex = {}); + + void appendLogTailErrorMsg(std::string & msg); + + StorePathSet exportReferences(const StorePathSet & storePaths); + + JobCategory jobCategory() const override { + return JobCategory::Build; + }; +}; + +} diff --git a/src/libstore/include/nix/store/build/derivation-building-misc.hh b/src/libstore/include/nix/store/build/derivation-building-misc.hh index 915d891d7..3259c5e36 100644 --- a/src/libstore/include/nix/store/build/derivation-building-misc.hh +++ b/src/libstore/include/nix/store/build/derivation-building-misc.hh @@ -1,6 +1,6 @@ #pragma once /** - * @file Misc type defitions for both local building and remote (RPC building) + * @file Misc type definitions for both local building and remote (RPC building) */ #include "nix/util/hash.hh" diff --git a/src/libstore/include/nix/store/build/derivation-goal.hh b/src/libstore/include/nix/store/build/derivation-goal.hh index 485a34ec4..9d4257cb3 100644 --- a/src/libstore/include/nix/store/build/derivation-goal.hh +++ b/src/libstore/include/nix/store/build/derivation-goal.hh @@ -14,13 +14,6 @@ namespace nix { using std::map; -#ifndef _WIN32 // TODO enable build hook on Windows -struct HookInstance; -struct DerivationBuilder; -#endif - -typedef enum {rpAccept, rpDecline, rpPostpone} HookReply; - /** Used internally */ void runPostBuildHook( Store & store, @@ -33,13 +26,8 @@ void runPostBuildHook( */ struct DerivationGoal : public Goal { - /** - * Whether to use an on-disk .drv file. - */ - bool useDerivation; - /** The path of the derivation. */ - StorePath drvPath; + ref drvReq; /** * The specific outputs that we need to build. @@ -54,7 +42,7 @@ struct DerivationGoal : public Goal * The goal state machine is progressing based on the current value of * `wantedOutputs. No actions are needed. */ - OutputsUnmodifedDontNeed, + OutputsUnmodifiedDontNeed, /** * `wantedOutputs` has been extended, but the state machine is * proceeding according to its old value, so we need to restart. @@ -71,116 +59,32 @@ struct DerivationGoal : public Goal /** * Whether additional wanted outputs have been added. */ - NeedRestartForMoreOutputs needRestart = NeedRestartForMoreOutputs::OutputsUnmodifedDontNeed; + NeedRestartForMoreOutputs needRestart = NeedRestartForMoreOutputs::OutputsUnmodifiedDontNeed; /** - * See `retrySubstitution`; just for that field. - */ - enum RetrySubstitution { - /** - * No issues have yet arose, no need to restart. - */ - NoNeed, - /** - * Something failed and there is an incomplete closure. Let's retry - * substituting. - */ - YesNeed, - /** - * We are current or have already retried substitution, and whether or - * not something goes wrong we will not retry again. - */ - AlreadyRetried, - }; - - /** - * Whether to retry substituting the outputs after building the - * inputs. 
This is done in case of an incomplete closure. - */ - RetrySubstitution retrySubstitution = RetrySubstitution::NoNeed; - - /** - * The derivation stored at drvPath. + * The derivation stored at `drvReq`. */ std::unique_ptr drv; - std::unique_ptr parsedDrv; - std::unique_ptr drvOptions; - /** * The remainder is state held during the build. */ - /** - * Locks on (fixed) output paths. - */ - PathLocks outputLocks; - - /** - * All input paths (that is, the union of FS closures of the - * immediate input paths). - */ - StorePathSet inputPaths; - std::map initialOutputs; - /** - * File descriptor for the log file. - */ - AutoCloseFD fdLogFile; - std::shared_ptr logFileSink, logSink; - - /** - * Number of bytes received from the builder's stdout/stderr. - */ - unsigned long logSize; - - /** - * The most recent log lines. - */ - std::list logTail; - - std::string currentLogLine; - size_t currentLogLinePos = 0; // to handle carriage return - - std::string currentHookLine; - -#ifndef _WIN32 // TODO enable build hook on Windows - /** - * The build hook. - */ - std::unique_ptr hook; - - std::unique_ptr builder; -#endif - BuildMode buildMode; - std::unique_ptr> mcExpectedBuilds, mcRunningBuilds; + std::unique_ptr> mcExpectedBuilds; - std::unique_ptr act; - - /** - * Activity that denotes waiting for a lock. - */ - std::unique_ptr actLock; - - std::map builderActivities; - - /** - * The remote machine on which we're building. - */ - std::string machineName; - - DerivationGoal(const StorePath & drvPath, + DerivationGoal(ref drvReq, const OutputsSpec & wantedOutputs, Worker & worker, BuildMode buildMode = bmNormal); DerivationGoal(const StorePath & drvPath, const BasicDerivation & drv, const OutputsSpec & wantedOutputs, Worker & worker, BuildMode buildMode = bmNormal); - ~DerivationGoal(); + ~DerivationGoal() = default; - void timedOut(Error && ex) override; + void timedOut(Error && ex) override { unreachable(); }; std::string key() override; @@ -192,43 +96,16 @@ struct DerivationGoal : public Goal /** * The states. */ - Co init() override; - Co haveDerivation(); - Co gaveUpOnSubstitution(); - Co tryToBuild(); - Co hookDone(); - - /** - * Is the build hook willing to perform the build? - */ - HookReply tryBuildHook(); - - /** - * Open a log file and a pipe to it. - */ - Path openLogFile(); - - /** - * Close the log file. - */ - void closeLogFile(); - - bool isReadDesc(Descriptor fd); - - /** - * Callback used by the worker to write to the log. - */ - void handleChildOutput(Descriptor fd, std::string_view data) override; - void handleEOF(Descriptor fd) override; - void flushLine(); + Co loadDerivation(); + Co haveDerivation(StorePath drvPath); /** * Wrappers around the corresponding Store methods that first consult the * derivation. This is currently needed because when there is no drv file * there also is no DB entry. */ - std::map> queryPartialDerivationOutputMap(); - OutputPathMap queryDerivationOutputMap(); + std::map> queryPartialDerivationOutputMap(const StorePath & drvPath); + OutputPathMap queryDerivationOutputMap(const StorePath & drvPath); /** * Update 'initialOutputs' to determine the current status of the @@ -236,34 +113,24 @@ struct DerivationGoal : public Goal * whether all outputs are valid and non-corrupt, and a * 'SingleDrvOutputs' structure containing the valid outputs. 
*/ - std::pair checkPathValidity(); + std::pair checkPathValidity(const StorePath & drvPath); /** * Aborts if any output is not valid or corrupt, and otherwise * returns a 'SingleDrvOutputs' structure containing all outputs. */ - SingleDrvOutputs assertPathValidity(); + SingleDrvOutputs assertPathValidity(const StorePath & drvPath); - /** - * Forcibly kill the child process, if any. - */ - void killChild(); - - Co repairClosure(); - - void started(); + Co repairClosure(StorePath drvPath); Done done( + const StorePath & drvPath, BuildResult::Status status, SingleDrvOutputs builtOutputs = {}, std::optional ex = {}); - void appendLogTailErrorMsg(std::string & msg); - - StorePathSet exportReferences(const StorePathSet & storePaths); - JobCategory jobCategory() const override { - return JobCategory::Build; + return JobCategory::Administration; }; }; diff --git a/src/libstore/include/nix/store/build/drv-output-substitution-goal.hh b/src/libstore/include/nix/store/build/drv-output-substitution-goal.hh index a00de41ad..0176f001a 100644 --- a/src/libstore/include/nix/store/build/drv-output-substitution-goal.hh +++ b/src/libstore/include/nix/store/build/drv-output-substitution-goal.hh @@ -33,7 +33,7 @@ public: typedef void (DrvOutputSubstitutionGoal::*GoalState)(); GoalState state; - Co init() override; + Co init(); Co realisationFetched(Goals waitees, std::shared_ptr outputInfo, nix::ref sub); void timedOut(Error && ex) override { unreachable(); }; diff --git a/src/libstore/include/nix/store/build/goal.hh b/src/libstore/include/nix/store/build/goal.hh index 9be27f6b3..577ce1e84 100644 --- a/src/libstore/include/nix/store/build/goal.hh +++ b/src/libstore/include/nix/store/build/goal.hh @@ -50,6 +50,16 @@ enum struct JobCategory { * A substitution an arbitrary store object; it will use network resources. */ Substitution, + /** + * A goal that does no "real" work by itself, and just exists to depend on + * other goals which *do* do real work. These goals therefore are not + * limited. + * + * These goals cannot infinitely create themselves, so there is no risk of + * a "fork bomb" type situation (which would be a problem even though the + * goal do no real work) either. + */ + Administration, }; struct Goal : public std::enable_shared_from_this @@ -61,7 +71,7 @@ private: Goals waitees; public: - typedef enum {ecBusy, ecSuccess, ecFailed, ecNoSubstituters, ecIncompleteClosure} ExitCode; + typedef enum {ecBusy, ecSuccess, ecFailed, ecNoSubstituters} ExitCode; /** * Backlink to the worker. @@ -85,12 +95,6 @@ public: */ size_t nrNoSubstituters = 0; - /** - * Number of substitution goals we are/were waiting for that - * failed because they had unsubstitutable references. - */ - size_t nrIncompleteClosure = 0; - /** * Name of this goal for debugging purposes. */ @@ -344,17 +348,6 @@ protected: */ std::optional top_co; - /** - * The entry point for the goal - */ - virtual Co init() = 0; - - /** - * Wrapper around @ref init since virtual functions - * can't be used in constructors. - */ - inline Co init_wrapper(); - /** * Signals that the goal is done. * `co_return` the result. If you're not inside a coroutine, you can ignore @@ -377,13 +370,24 @@ public: */ BuildResult getBuildResult(const DerivedPath &) const; + /** + * Hack to say that this goal should not log `ex`, but instead keep + * it around. Set by a waitee which sees itself as the designated + * continuation of this goal, responsible for reporting its + * successes or failures. 
+ * + * @todo this is yet another not-nice hack in the goal system that + * we ought to get rid of. See #11927 + */ + bool preserveException = false; + /** * Exception containing an error message, if any. */ std::optional ex; - Goal(Worker & worker) - : worker(worker), top_co(init_wrapper()) + Goal(Worker & worker, Co init) + : worker(worker), top_co(std::move(init)) { // top_co shouldn't have a goal already, should be nullptr. assert(!top_co->handle.promise().goal); @@ -446,7 +450,3 @@ template struct std::coroutine_traits { using promise_type = nix::Goal::promise_type; }; - -nix::Goal::Co nix::Goal::init_wrapper() { - co_return init(); -} diff --git a/src/libstore/include/nix/store/build/substitution-goal.hh b/src/libstore/include/nix/store/build/substitution-goal.hh index 7b68b0821..b61706840 100644 --- a/src/libstore/include/nix/store/build/substitution-goal.hh +++ b/src/libstore/include/nix/store/build/substitution-goal.hh @@ -64,7 +64,7 @@ public: /** * The states. */ - Co init() override; + Co init(); Co gotInfo(); Co tryToRun(StorePath subPath, nix::ref sub, std::shared_ptr info, bool & substituterFailed); Co finished(); diff --git a/src/libstore/include/nix/store/build/worker.hh b/src/libstore/include/nix/store/build/worker.hh index 7e03a0c2f..c70c72377 100644 --- a/src/libstore/include/nix/store/build/worker.hh +++ b/src/libstore/include/nix/store/build/worker.hh @@ -3,6 +3,7 @@ #include "nix/util/types.hh" #include "nix/store/store-api.hh" +#include "nix/store/derived-path-map.hh" #include "nix/store/build/goal.hh" #include "nix/store/realisation.hh" #include "nix/util/muxable-pipe.hh" @@ -14,6 +15,7 @@ namespace nix { /* Forward definition. */ struct DerivationGoal; +struct DerivationBuildingGoal; struct PathSubstitutionGoal; class DrvOutputSubstitutionGoal; @@ -103,7 +105,10 @@ private: * Maps used to prevent multiple instantiations of a goal for the * same derivation / path. */ - std::map> derivationGoals; + + DerivedPathMap> derivationGoals; + + std::map> derivationBuildingGoals; std::map> substitutionGoals; std::map> drvOutputSubstitutionGoals; @@ -196,17 +201,27 @@ public: * @ref DerivationGoal "derivation goal" */ private: + template + std::shared_ptr initGoalIfNeeded(std::weak_ptr & goal_weak, Args && ...args); + std::shared_ptr makeDerivationGoalCommon( - const StorePath & drvPath, const OutputsSpec & wantedOutputs, + ref drvReq, const OutputsSpec & wantedOutputs, std::function()> mkDrvGoal); public: std::shared_ptr makeDerivationGoal( - const StorePath & drvPath, + ref drvReq, const OutputsSpec & wantedOutputs, BuildMode buildMode = bmNormal); std::shared_ptr makeBasicDerivationGoal( const StorePath & drvPath, const BasicDerivation & drv, const OutputsSpec & wantedOutputs, BuildMode buildMode = bmNormal); + /** + * @ref DerivationBuildingGoal "derivation goal" + */ + std::shared_ptr makeDerivationBuildingGoal( + const StorePath & drvPath, const Derivation & drv, + BuildMode buildMode = bmNormal); + /** * @ref PathSubstitutionGoal "substitution goal" */ diff --git a/src/libstore/include/nix/store/common-protocol-impl.hh b/src/libstore/include/nix/store/common-protocol-impl.hh index 18e63ac33..e9c726a99 100644 --- a/src/libstore/include/nix/store/common-protocol-impl.hh +++ b/src/libstore/include/nix/store/common-protocol-impl.hh @@ -4,7 +4,7 @@ * * Template implementations (as opposed to mere declarations). * - * This file is an exmample of the "impl.hh" pattern. See the + * This file is an example of the "impl.hh" pattern. See the * contributing guide. 
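The new `Goal(Worker & worker, Co init)` constructor above replaces the old virtual `init()`/`init_wrapper()` entry point: a virtual call made from a base-class constructor would not dispatch to the derived class, so each subclass now constructs its own coroutine and hands it to the base. A minimal standalone sketch of that pattern (this `Co` is a simplified stand-in, not Nix's real coroutine type):

```cpp
// Minimal coroutine handed to the base-class constructor, mirroring the
// Goal(worker, init()) pattern above.
#include <coroutine>
#include <exception>
#include <iostream>
#include <utility>

struct Co {
    struct promise_type {
        Co get_return_object() { return Co{std::coroutine_handle<promise_type>::from_promise(*this)}; }
        std::suspend_always initial_suspend() noexcept { return {}; }
        std::suspend_always final_suspend() noexcept { return {}; }
        void return_void() {}
        void unhandled_exception() { std::terminate(); }
    };
    std::coroutine_handle<promise_type> handle;
    explicit Co(std::coroutine_handle<promise_type> h) : handle(h) {}
    Co(Co && other) noexcept : handle(std::exchange(other.handle, {})) {}
    ~Co() { if (handle) handle.destroy(); }
};

struct Goal {
    Co top_co;
    explicit Goal(Co init) : top_co(std::move(init)) {} // no virtual init() needed
    void resume() { if (!top_co.handle.done()) top_co.handle.resume(); }
};

struct SubstitutionGoal : Goal {
    SubstitutionGoal() : Goal(init()) {} // the derived class supplies its own coroutine
    Co init() { std::cout << "substituting...\n"; co_return; }
};

int main()
{
    SubstitutionGoal g;
    g.resume();
}
```

Because `initial_suspend()` suspends, calling `init()` in the mem-initializer list only creates the coroutine frame; none of its body runs until the scheduler resumes it, by which point the object is fully constructed.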
*/ diff --git a/src/libstore/include/nix/store/common-protocol.hh b/src/libstore/include/nix/store/common-protocol.hh index 7887120b5..1dc4aa7c5 100644 --- a/src/libstore/include/nix/store/common-protocol.hh +++ b/src/libstore/include/nix/store/common-protocol.hh @@ -89,12 +89,12 @@ DECLARE_COMMON_SERIALISER(std::map); * that the underlying types never serialize to the empty string. * * We do this instead of a generic std::optional instance because - * ordinal tags (0 or 1, here) are a bit of a compatability hazard. For + * ordinal tags (0 or 1, here) are a bit of a compatibility hazard. For * the same reason, we don't have a std::variant instances (ordinal * tags 0...n). * * We could the generic instances and then these as specializations for - * compatability, but that's proven a bit finnicky, and also makes the + * compatibility, but that's proven a bit finnicky, and also makes the * worker protocol harder to implement in other languages where such * specializations may not be allowed. */ diff --git a/src/libstore/include/nix/store/derivation-options.hh b/src/libstore/include/nix/store/derivation-options.hh index 16730b5c9..f61a43e60 100644 --- a/src/libstore/include/nix/store/derivation-options.hh +++ b/src/libstore/include/nix/store/derivation-options.hh @@ -170,7 +170,7 @@ struct DerivationOptions /** * Parse this information from its legacy encoding as part of the * environment. This should not be used with nice greenfield formats - * (e.g. JSON) but is necessary for supporing old formats (e.g. + * (e.g. JSON) but is necessary for supporting old formats (e.g. * ATerm). */ static DerivationOptions diff --git a/src/libstore/include/nix/store/derivations.hh b/src/libstore/include/nix/store/derivations.hh index 46a9e2d02..a813137bc 100644 --- a/src/libstore/include/nix/store/derivations.hh +++ b/src/libstore/include/nix/store/derivations.hh @@ -214,7 +214,7 @@ struct DerivationType { /** * Impure derivation type * - * This is similar at buil-time to the content addressed, not standboxed, not fixed + * This is similar at build-time to the content addressed, not standboxed, not fixed * type, but has some restrictions on its usage. */ struct Impure { diff --git a/src/libstore/include/nix/store/derived-path-map.hh b/src/libstore/include/nix/store/derived-path-map.hh index cad86d1b4..16ffeb05e 100644 --- a/src/libstore/include/nix/store/derived-path-map.hh +++ b/src/libstore/include/nix/store/derived-path-map.hh @@ -21,8 +21,11 @@ namespace nix { * * @param V A type to instantiate for each output. It should probably * should be an "optional" type so not every interior node has to have a - * value. `* const Something` or `std::optional` would be - * good choices for "optional" types. + * value. For example, the scheduler uses + * `DerivedPathMap>` to + * remember which goals correspond to which outputs. `* const Something` + * or `std::optional` would also be good choices for + * "optional" types. */ template struct DerivedPathMap { diff --git a/src/libstore/include/nix/store/gc-store.hh b/src/libstore/include/nix/store/gc-store.hh index 23261f576..8d9a83e67 100644 --- a/src/libstore/include/nix/store/gc-store.hh +++ b/src/libstore/include/nix/store/gc-store.hh @@ -98,7 +98,7 @@ struct GCResults * Some views have only a no-op temp roots even though others to the * same store allow triggering GC. For instance one can't add a root * over ssh, but that doesn't prevent someone from gc-ing that store - * accesed via SSH locally). + * accessed via SSH locally). 
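The `common-protocol.hh` comment above motivates encoding an optional value as the value itself, reserving the empty string for "nothing", instead of a 0/1 ordinal tag. A sketch of that convention on plain strings (only valid when the payload can never serialize to an empty string):

```cpp
// "Empty string means nothing" optional encoding: no ordinal tag on the wire.
#include <iostream>
#include <optional>
#include <string>

std::string writeOptional(const std::optional<std::string> & v)
{
    return v ? *v : ""; // caller guarantees *v is never empty
}

std::optional<std::string> readOptional(const std::string & s)
{
    if (s.empty()) return std::nullopt;
    return s;
}

int main()
{
    std::cout << readOptional(writeOptional(std::nullopt)).has_value() << "\n"; // 0
    std::cout << *readOptional(writeOptional(std::optional<std::string>("x"))) << "\n"; // x
}
```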
* * - The derived `LocalFSStore` class has `LocalFSStore::addPermRoot`, * which is not part of this class because it relies on the notion of diff --git a/src/libstore/include/nix/store/globals.hh b/src/libstore/include/nix/store/globals.hh index 00d7dcd6b..93a54eb07 100644 --- a/src/libstore/include/nix/store/globals.hh +++ b/src/libstore/include/nix/store/globals.hh @@ -515,7 +515,7 @@ public: R"( If set to `true` (the default), build logs written to `/nix/var/log/nix/drvs` are compressed on the fly using bzip2. - Otherwise, they aren't compressed. + Otherwise, they are not compressed. )", {"build-compress-log"}}; @@ -637,8 +637,8 @@ public: location in the sandbox; for instance, `/bin=/nix-bin` mounts the path `/nix-bin` as `/bin` inside the sandbox. If *source* is followed by `?`, then it is not an error if *source* does not exist; - for example, `/dev/nvidiactl?` specifies that `/dev/nvidiactl` only - be mounted in the sandbox if it exists in the host filesystem. + for example, `/dev/nvidiactl?` specifies that `/dev/nvidiactl` + only be mounted in the sandbox if it exists in the host filesystem. If the source is in the Nix store, then its closure is added to the sandbox as well. @@ -682,7 +682,9 @@ public: description of the `size` option of `tmpfs` in mount(8). The default is `50%`. )"}; +#endif +#if defined(__linux__) || defined(__FreeBSD__) Setting sandboxBuildDir{this, "/build", "sandbox-build-dir", R"( *Linux only* @@ -695,14 +697,7 @@ public: Setting> buildDir{this, std::nullopt, "build-dir", R"( - The directory on the host, in which derivations' temporary build directories are created. - - If not set, Nix uses the system temporary directory indicated by the `TMPDIR` environment variable. - Note that builds are often performed by the Nix daemon, so its `TMPDIR` is used, and not that of the Nix command line interface. - - This is also the location where [`--keep-failed`](@docroot@/command-ref/opt-common.md#opt-keep-failed) leaves its files. - - If Nix runs without sandbox, or if the platform does not support sandboxing with bind mounts (e.g. macOS), then the [`builder`](@docroot@/language/derivations.md#attr-builder)'s environment contains this directory instead of the virtual location [`sandbox-build-dir`](#conf-sandbox-build-dir). + Override the `build-dir` store setting for all stores that have this setting. )"}; Setting allowedImpureHostPrefixes{this, {}, "allowed-impure-host-deps", diff --git a/src/libstore/include/nix/store/local-store.hh b/src/libstore/include/nix/store/local-store.hh index 9a118fcc5..fd7e6fc36 100644 --- a/src/libstore/include/nix/store/local-store.hh +++ b/src/libstore/include/nix/store/local-store.hh @@ -34,7 +34,39 @@ struct OptimiseStats uint64_t bytesFreed = 0; }; -struct LocalStoreConfig : std::enable_shared_from_this, virtual LocalFSStoreConfig +struct LocalBuildStoreConfig : virtual LocalFSStoreConfig +{ + +private: + /** + Input for computing the build directory. See `getBuildDir()`. + */ + Setting> buildDir{this, std::nullopt, "build-dir", + R"( + The directory on the host, in which derivations' temporary build directories are created. + + If not set, Nix will use the `builds` subdirectory of its configured state directory. + + Note that builds are often performed by the Nix daemon, so its `build-dir` applies. + + Nix will create this directory automatically with suitable permissions if it does not exist. + Otherwise its permissions must allow all users to traverse the directory (i.e. 
it must have `o+x` set, in unix parlance) for non-sandboxed builds to work correctly. + + This is also the location where [`--keep-failed`](@docroot@/command-ref/opt-common.md#opt-keep-failed) leaves its files. + + If Nix runs without sandbox, or if the platform does not support sandboxing with bind mounts (e.g. macOS), then the [`builder`](@docroot@/language/derivations.md#attr-builder)'s environment will contain this directory, instead of the virtual location [`sandbox-build-dir`](#conf-sandbox-build-dir). + + > **Warning** + > + > `build-dir` must not be set to a world-writable directory. + > Placing temporary build directories in a world-writable place allows other users to access or modify build data that is currently in use. + > This alone is merely an impurity, but combined with another factor this has allowed malicious derivations to escape the build sandbox. + )"}; +public: + Path getBuildDir() const; +}; + +struct LocalStoreConfig : std::enable_shared_from_this, virtual LocalFSStoreConfig, virtual LocalBuildStoreConfig { using LocalFSStoreConfig::LocalFSStoreConfig; diff --git a/src/libstore/include/nix/store/meson.build b/src/libstore/include/nix/store/meson.build index c5aa9b461..a18430417 100644 --- a/src/libstore/include/nix/store/meson.build +++ b/src/libstore/include/nix/store/meson.build @@ -13,6 +13,7 @@ headers = [config_pub_h] + files( 'binary-cache-store.hh', 'build-result.hh', 'build/derivation-goal.hh', + 'build/derivation-building-goal.hh', 'build/derivation-building-misc.hh', 'build/drv-output-substitution-goal.hh', 'build/goal.hh', diff --git a/src/libstore/include/nix/store/outputs-spec.hh b/src/libstore/include/nix/store/outputs-spec.hh index b47f26542..4e874a6f1 100644 --- a/src/libstore/include/nix/store/outputs-spec.hh +++ b/src/libstore/include/nix/store/outputs-spec.hh @@ -13,13 +13,13 @@ namespace nix { /** * An (owned) output name. Just a type alias used to make code more - * readible. + * readable. */ typedef std::string OutputName; /** * A borrowed output name. Just a type alias used to make code more - * readible. + * readable. */ typedef std::string_view OutputNameView; diff --git a/src/libstore/include/nix/store/path-info.hh b/src/libstore/include/nix/store/path-info.hh index 4691bfa95..690f0f813 100644 --- a/src/libstore/include/nix/store/path-info.hh +++ b/src/libstore/include/nix/store/path-info.hh @@ -51,7 +51,7 @@ struct UnkeyedValidPathInfo Hash narHash; /** - * Other store objects this store object referes to. + * Other store objects this store object refers to. 
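The `build-dir` documentation above describes several possible sources for the temporary build directory. A sketch of that precedence, with `std::optional` standing in for Nix's `Setting` wrappers:

```cpp
// Precedence sketch for the build directory: a global "build-dir" override
// wins, then the per-store setting, then <state-dir>/builds as the default.
#include <iostream>
#include <optional>
#include <string>

std::string getBuildDir(
    const std::optional<std::string> & globalBuildDir, // global build-dir override
    const std::optional<std::string> & storeBuildDir,  // per-store build-dir setting
    const std::string & stateDir)
{
    if (globalBuildDir) return *globalBuildDir;
    if (storeBuildDir) return *storeBuildDir;
    return stateDir + "/builds";
}

int main()
{
    std::cout << getBuildDir(std::nullopt, std::nullopt, "/nix/var/nix") << "\n";      // /nix/var/nix/builds
    std::cout << getBuildDir(std::nullopt, "/var/tmp/builds", "/nix/var/nix") << "\n"; // /var/tmp/builds
}
```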
*/ StorePathSet references; diff --git a/src/libstore/include/nix/store/remote-store.hh b/src/libstore/include/nix/store/remote-store.hh index dd2396fe3..18c02456f 100644 --- a/src/libstore/include/nix/store/remote-store.hh +++ b/src/libstore/include/nix/store/remote-store.hh @@ -149,9 +149,7 @@ struct RemoteStore : void addSignatures(const StorePath & storePath, const StringSet & sigs) override; - void queryMissing(const std::vector & targets, - StorePathSet & willBuild, StorePathSet & willSubstitute, StorePathSet & unknown, - uint64_t & downloadSize, uint64_t & narSize) override; + MissingPaths queryMissing(const std::vector & targets) override; void addBuildLog(const StorePath & drvPath, std::string_view log) override; diff --git a/src/libstore/include/nix/store/serve-protocol-impl.hh b/src/libstore/include/nix/store/serve-protocol-impl.hh index 4ab164721..4e66ca542 100644 --- a/src/libstore/include/nix/store/serve-protocol-impl.hh +++ b/src/libstore/include/nix/store/serve-protocol-impl.hh @@ -4,7 +4,7 @@ * * Template implementations (as opposed to mere declarations). * - * This file is an exmample of the "impl.hh" pattern. See the + * This file is an example of the "impl.hh" pattern. See the * contributing guide. */ diff --git a/src/libstore/include/nix/store/ssh.hh b/src/libstore/include/nix/store/ssh.hh index 40f2189d8..be9cf0c48 100644 --- a/src/libstore/include/nix/store/ssh.hh +++ b/src/libstore/include/nix/store/ssh.hh @@ -62,7 +62,7 @@ public: * * Current implementation is to use `fcntl` with `F_SETPIPE_SZ`, * which is Linux-only. For this implementation, `size` must - * convertable to an `int`. In other words, it must be within + * convertible to an `int`. In other words, it must be within * `[0, INT_MAX]`. */ void trySetBufferSize(size_t size); diff --git a/src/libstore/include/nix/store/store-api.hh b/src/libstore/include/nix/store/store-api.hh index 1648b13c1..e0a3e67d1 100644 --- a/src/libstore/include/nix/store/store-api.hh +++ b/src/libstore/include/nix/store/store-api.hh @@ -71,6 +71,18 @@ struct KeyedBuildResult; typedef std::map> StorePathCAMap; +/** + * Information about what paths will be built or substituted, returned + * by Store::queryMissing(). + */ +struct MissingPaths +{ + StorePathSet willBuild; + StorePathSet willSubstitute; + StorePathSet unknown; + uint64_t downloadSize{0}; + uint64_t narSize{0}; +}; /** * About the class hierarchy of the store types: @@ -382,7 +394,7 @@ public: /** * Query the mapping outputName => outputPath for the given - * derivation. All outputs are mentioned so ones mising the mapping + * derivation. All outputs are mentioned so ones missing the mapping * are mapped to `std::nullopt`. */ virtual std::map> queryPartialDerivationOutputMap( @@ -694,9 +706,7 @@ public: * derivations that will be built, and the set of output paths that * will be substituted. */ - virtual void queryMissing(const std::vector & targets, - StorePathSet & willBuild, StorePathSet & willSubstitute, StorePathSet & unknown, - uint64_t & downloadSize, uint64_t & narSize); + virtual MissingPaths queryMissing(const std::vector & targets); /** * Sort a set of paths topologically under the references @@ -809,7 +819,7 @@ protected: /** * Helper for methods that are not unsupported: this is used for - * default definitions for virtual methods that are meant to be overriden. + * default definitions for virtual methods that are meant to be overridden. * * @todo Using this should be a last resort. 
It is better to make * the method "virtual pure" and/or move it to a subclass. diff --git a/src/libstore/include/nix/store/store-dir-config.hh b/src/libstore/include/nix/store/store-dir-config.hh index 6bf9ebf14..14e3e7db8 100644 --- a/src/libstore/include/nix/store/store-dir-config.hh +++ b/src/libstore/include/nix/store/store-dir-config.hh @@ -89,7 +89,7 @@ struct MixStoreDirMethods /** * Read-only variant of addToStore(). It returns the store - * path for the given file sytem object. + * path for the given file system object. */ std::pair computeStorePath( std::string_view name, @@ -125,7 +125,7 @@ struct StoreDirConfigBase : Config */ struct StoreDirConfig : StoreDirConfigBase, MixStoreDirMethods { - using Params = std::map; + using Params = StringMap; StoreDirConfig(const Params & params); diff --git a/src/libstore/include/nix/store/store-reference.hh b/src/libstore/include/nix/store/store-reference.hh index 433a347aa..c1b681ba1 100644 --- a/src/libstore/include/nix/store/store-reference.hh +++ b/src/libstore/include/nix/store/store-reference.hh @@ -9,7 +9,7 @@ namespace nix { /** * A parsed Store URI (URI is a slight misnomer...), parsed but not yet - * resolved to a specific instance and query parms validated. + * resolved to a specific instance and query params validated. * * Supported values are: * @@ -41,7 +41,7 @@ namespace nix { */ struct StoreReference { - using Params = std::map; + using Params = StringMap; /** * Special store reference `""` or `"auto"` diff --git a/src/libstore/include/nix/store/store-registration.hh b/src/libstore/include/nix/store/store-registration.hh index 3f82ff51c..17298118e 100644 --- a/src/libstore/include/nix/store/store-registration.hh +++ b/src/libstore/include/nix/store/store-registration.hh @@ -7,7 +7,7 @@ * those implementations. * * Consumers of an arbitrary store from a URL/JSON configuration instead - * just need the defintions `nix/store/store-open.hh`; those do use this + * just need the definitions `nix/store/store-open.hh`; those do use this * but only as an implementation. Consumers of a specific extra type of * store can skip both these, and just use the definition of the store * in question directly. @@ -71,7 +71,7 @@ struct Implementations }; auto [it, didInsert] = registered().insert({TConfig::name(), std::move(factory)}); if (!didInsert) { - throw Error("Already registred store with name '%s'", it->first); + throw Error("Already registered store with name '%s'", it->first); } } }; diff --git a/src/libstore/include/nix/store/worker-protocol-connection.hh b/src/libstore/include/nix/store/worker-protocol-connection.hh index 11f112a71..ce7e9aef4 100644 --- a/src/libstore/include/nix/store/worker-protocol-connection.hh +++ b/src/libstore/include/nix/store/worker-protocol-connection.hh @@ -97,7 +97,7 @@ struct WorkerProto::BasicClientConnection : WorkerProto::BasicConnection /** * After calling handshake, must call this to exchange some basic - * information abou the connection. + * information about the connection. */ ClientHandshakeInfo postHandshake(const StoreDirConfig & store); @@ -157,7 +157,7 @@ struct WorkerProto::BasicServerConnection : WorkerProto::BasicConnection /** * After calling handshake, must call this to exchange some basic - * information abou the connection. + * information about the connection. 
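With the new `MissingPaths` struct above, `queryMissing` returns everything as one value instead of filling five out-parameters. A toy sketch of that shape, with `std::set<std::string>` standing in for `StorePathSet`; the classification rule and sizes here are purely illustrative, not the real dependency walk:

```cpp
// One aggregate result instead of five out-parameters.
#include <cstdint>
#include <iostream>
#include <set>
#include <string>

struct MissingPaths {
    std::set<std::string> willBuild;
    std::set<std::string> willSubstitute;
    std::set<std::string> unknown;
    uint64_t downloadSize{0};
    uint64_t narSize{0};
};

MissingPaths queryMissing(const std::set<std::string> & targets)
{
    MissingPaths res;
    for (auto & t : targets) {
        if (t.ends_with(".drv"))
            res.willBuild.insert(t); // illustrative rule: .drv files get built
        else {
            res.willSubstitute.insert(t);
            res.downloadSize += 1'000; // placeholder sizes
            res.narSize += 4'000;
        }
    }
    return res;
}

int main()
{
    auto missing = queryMissing({"/nix/store/example-hello.drv", "/nix/store/example-libc"});
    std::cout << missing.willBuild.size() << " to build, "
              << missing.willSubstitute.size() << " to substitute, "
              << missing.downloadSize << " bytes to download\n";
}
```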
*/ void postHandshake(const StoreDirConfig & store, const ClientHandshakeInfo & info); }; diff --git a/src/libstore/include/nix/store/worker-protocol-impl.hh b/src/libstore/include/nix/store/worker-protocol-impl.hh index 908a9323e..23e6068e9 100644 --- a/src/libstore/include/nix/store/worker-protocol-impl.hh +++ b/src/libstore/include/nix/store/worker-protocol-impl.hh @@ -4,7 +4,7 @@ * * Template implementations (as opposed to mere declarations). * - * This file is an exmample of the "impl.hh" pattern. See the + * This file is an example of the "impl.hh" pattern. See the * contributing guide. */ diff --git a/src/libstore/include/nix/store/worker-protocol.hh b/src/libstore/include/nix/store/worker-protocol.hh index 1b188806d..9630a88c0 100644 --- a/src/libstore/include/nix/store/worker-protocol.hh +++ b/src/libstore/include/nix/store/worker-protocol.hh @@ -89,7 +89,7 @@ struct WorkerProto struct BasicServerConnection; /** - * Extra information provided as part of protocol negotation. + * Extra information provided as part of protocol negotiation. */ struct ClientHandshakeInfo; diff --git a/src/libstore/linux/include/nix/store/meson.build b/src/libstore/linux/include/nix/store/meson.build index a664aefa9..c8e6a8268 100644 --- a/src/libstore/linux/include/nix/store/meson.build +++ b/src/libstore/linux/include/nix/store/meson.build @@ -2,4 +2,5 @@ include_dirs += include_directories('../..') headers += files( 'personality.hh', + # hack for trailing newline ) diff --git a/src/libstore/linux/meson.build b/src/libstore/linux/meson.build index 6fc193cf8..5771cead5 100644 --- a/src/libstore/linux/meson.build +++ b/src/libstore/linux/meson.build @@ -1,5 +1,6 @@ sources += files( 'personality.cc', + # hack for trailing newline ) subdir('include/nix/store') diff --git a/src/libstore/local-store.cc b/src/libstore/local-store.cc index 1ab3ed13a..0d2d96e61 100644 --- a/src/libstore/local-store.cc +++ b/src/libstore/local-store.cc @@ -77,6 +77,16 @@ std::string LocalStoreConfig::doc() ; } +Path LocalBuildStoreConfig::getBuildDir() const +{ + return + settings.buildDir.get().has_value() + ? *settings.buildDir.get() + : buildDir.get().has_value() + ? 
*buildDir.get() + : stateDir.get() + "/builds"; +} + ref LocalStore::Config::openStore() const { return make_ref(ref{shared_from_this()}); @@ -133,7 +143,7 @@ LocalStore::LocalStore(ref config) Path gcRootsDir = config->stateDir + "/gcroots"; if (!pathExists(gcRootsDir)) { createDirs(gcRootsDir); - createSymlink(profilesDir, gcRootsDir + "/profiles"); + replaceSymlink(profilesDir, gcRootsDir + "/profiles"); } for (auto & perUserDir : {profilesDir + "/per-user", gcRootsDir + "/per-user"}) { diff --git a/src/libstore/meson.build b/src/libstore/meson.build index ea1ea029e..2aff17290 100644 --- a/src/libstore/meson.build +++ b/src/libstore/meson.build @@ -256,6 +256,7 @@ sources = files( 'binary-cache-store.cc', 'build-result.cc', 'build/derivation-goal.cc', + 'build/derivation-building-goal.cc', 'build/drv-output-substitution-goal.cc', 'build/entry-points.cc', 'build/goal.cc', diff --git a/src/libstore/misc.cc b/src/libstore/misc.cc index dabae647f..7c97dbc57 100644 --- a/src/libstore/misc.cc +++ b/src/libstore/misc.cc @@ -98,23 +98,17 @@ const ContentAddress * getDerivationCA(const BasicDerivation & drv) return nullptr; } -void Store::queryMissing(const std::vector & targets, - StorePathSet & willBuild_, StorePathSet & willSubstitute_, StorePathSet & unknown_, - uint64_t & downloadSize_, uint64_t & narSize_) +MissingPaths Store::queryMissing(const std::vector & targets) { Activity act(*logger, lvlDebug, actUnknown, "querying info about missing paths"); - downloadSize_ = narSize_ = 0; - // FIXME: make async. ThreadPool pool(fileTransferSettings.httpConnections); struct State { std::unordered_set done; - StorePathSet & unknown, & willSubstitute, & willBuild; - uint64_t & downloadSize; - uint64_t & narSize; + MissingPaths res; }; struct DrvState @@ -125,7 +119,7 @@ void Store::queryMissing(const std::vector & targets, DrvState(size_t left) : left(left) { } }; - Sync state_(State{{}, unknown_, willSubstitute_, willBuild_, downloadSize_, narSize_}); + Sync state_; std::function doPath; @@ -143,7 +137,7 @@ void Store::queryMissing(const std::vector & targets, auto mustBuildDrv = [&](const StorePath & drvPath, const Derivation & drv) { { auto state(state_.lock()); - state->willBuild.insert(drvPath); + state->res.willBuild.insert(drvPath); } for (const auto & [inputDrv, inputNode] : drv.inputDrvs.map) { @@ -203,7 +197,7 @@ void Store::queryMissing(const std::vector & targets, if (!isValidPath(drvPath)) { // FIXME: we could try to substitute the derivation. 
auto state(state_.lock()); - state->unknown.insert(drvPath); + state->res.unknown.insert(drvPath); return; } @@ -282,7 +276,7 @@ void Store::queryMissing(const std::vector & targets, if (infos.empty()) { auto state(state_.lock()); - state->unknown.insert(bo.path); + state->res.unknown.insert(bo.path); return; } @@ -291,9 +285,9 @@ void Store::queryMissing(const std::vector & targets, { auto state(state_.lock()); - state->willSubstitute.insert(bo.path); - state->downloadSize += info->second.downloadSize; - state->narSize += info->second.narSize; + state->res.willSubstitute.insert(bo.path); + state->res.downloadSize += info->second.downloadSize; + state->res.narSize += info->second.narSize; } for (auto & ref : info->second.references) @@ -306,6 +300,8 @@ void Store::queryMissing(const std::vector & targets, pool.enqueue(std::bind(doPath, path)); pool.process(); + + return std::move(state_.lock()->res); } diff --git a/src/libstore/profiles.cc b/src/libstore/profiles.cc index b5161b79f..09ef36705 100644 --- a/src/libstore/profiles.cc +++ b/src/libstore/profiles.cc @@ -331,7 +331,7 @@ Path getDefaultProfile() if (!pathExists(profileLink)) { replaceSymlink(profile, profileLink); } - // Backwards compatibiliy measure: Make root's profile available as + // Backwards compatibility measure: Make root's profile available as // `.../default` as it's what NixOS and most of the init scripts expect Path globalProfileLink = settings.nixStateDir + "/profiles/default"; if (isRootUser() && !pathExists(globalProfileLink)) { diff --git a/src/libstore/realisation.cc b/src/libstore/realisation.cc index 635fb6946..9a72422eb 100644 --- a/src/libstore/realisation.cc +++ b/src/libstore/realisation.cc @@ -96,7 +96,7 @@ Realisation Realisation::fromJSON( std::map dependentRealisations; if (auto jsonDependencies = json.find("dependentRealisations"); jsonDependencies != json.end()) - for (auto & [jsonDepId, jsonDepOutPath] : jsonDependencies->get>()) + for (auto & [jsonDepId, jsonDepOutPath] : jsonDependencies->get()) dependentRealisations.insert({DrvOutput::parse(jsonDepId), StorePath(jsonDepOutPath)}); return Realisation{ diff --git a/src/libstore/remote-store.cc b/src/libstore/remote-store.cc index 3151f319c..1b8bad048 100644 --- a/src/libstore/remote-store.cc +++ b/src/libstore/remote-store.cc @@ -855,9 +855,7 @@ void RemoteStore::addSignatures(const StorePath & storePath, const StringSet & s } -void RemoteStore::queryMissing(const std::vector & targets, - StorePathSet & willBuild, StorePathSet & willSubstitute, StorePathSet & unknown, - uint64_t & downloadSize, uint64_t & narSize) +MissingPaths RemoteStore::queryMissing(const std::vector & targets) { { auto conn(getConnection()); @@ -868,16 +866,16 @@ void RemoteStore::queryMissing(const std::vector & targets, conn->to << WorkerProto::Op::QueryMissing; WorkerProto::write(*this, *conn, targets); conn.processStderr(); - willBuild = WorkerProto::Serialise::read(*this, *conn); - willSubstitute = WorkerProto::Serialise::read(*this, *conn); - unknown = WorkerProto::Serialise::read(*this, *conn); - conn->from >> downloadSize >> narSize; - return; + MissingPaths res; + res.willBuild = WorkerProto::Serialise::read(*this, *conn); + res.willSubstitute = WorkerProto::Serialise::read(*this, *conn); + res.unknown = WorkerProto::Serialise::read(*this, *conn); + conn->from >> res.downloadSize >> res.narSize; + return res; } fallback: - return Store::queryMissing(targets, willBuild, willSubstitute, - unknown, downloadSize, narSize); + return Store::queryMissing(targets); } diff 
--git a/src/libstore/restricted-store.cc b/src/libstore/restricted-store.cc index 0485f5584..69435122a 100644 --- a/src/libstore/restricted-store.cc +++ b/src/libstore/restricted-store.cc @@ -143,13 +143,7 @@ struct RestrictedStore : public virtual IndirectRootStore, public virtual GcStor unsupported("addSignatures"); } - void queryMissing( - const std::vector & targets, - StorePathSet & willBuild, - StorePathSet & willSubstitute, - StorePathSet & unknown, - uint64_t & downloadSize, - uint64_t & narSize) override; + MissingPaths queryMissing(const std::vector & targets) override; virtual std::optional getBuildLogExact(const StorePath & path) override { @@ -306,19 +300,14 @@ std::vector RestrictedStore::buildPathsWithResults( return results; } -void RestrictedStore::queryMissing( - const std::vector & targets, - StorePathSet & willBuild, - StorePathSet & willSubstitute, - StorePathSet & unknown, - uint64_t & downloadSize, - uint64_t & narSize) +MissingPaths RestrictedStore::queryMissing(const std::vector & targets) { /* This is slightly impure since it leaks information to the client about what paths will be built/substituted or are already present. Probably not a big deal. */ std::vector allowed; + StorePathSet unknown; for (auto & req : targets) { if (goal.isAllowed(req)) allowed.emplace_back(req); @@ -326,7 +315,12 @@ void RestrictedStore::queryMissing( unknown.insert(pathPartOfReq(req)); } - next->queryMissing(allowed, willBuild, willSubstitute, unknown, downloadSize, narSize); + auto res = next->queryMissing(allowed); + + for (auto & p : unknown) + res.unknown.insert(p); + + return res; } } diff --git a/src/libstore/ssh-store.cc b/src/libstore/ssh-store.cc index 753256d48..6992ae774 100644 --- a/src/libstore/ssh-store.cc +++ b/src/libstore/ssh-store.cc @@ -120,7 +120,7 @@ std::string MountedSSHStoreConfig::doc() * store. * * MountedSSHStore is very similar to UDSRemoteStore --- ignoring the - * superficial differnce of SSH vs Unix domain sockets, they both are + * superficial difference of SSH vs Unix domain sockets, they both are * accessing remote stores, and they both assume the store will be * mounted in the local filesystem. * diff --git a/src/libstore/ssh.cc b/src/libstore/ssh.cc index 97b75cba1..c8fec5244 100644 --- a/src/libstore/ssh.cc +++ b/src/libstore/ssh.cc @@ -34,7 +34,7 @@ SSHMaster::SSHMaster( throw Error("invalid SSH host name '%s'", host); auto state(state_.lock()); - state->tmpDir = std::make_unique(createTempDir("", "nix", true, true, 0700)); + state->tmpDir = std::make_unique(createTempDir("", "nix", 0700)); } void SSHMaster::addCommonSSHOpts(Strings & args) @@ -83,7 +83,7 @@ bool SSHMaster::isMasterRunning() { Strings createSSHEnv() { // Copy the environment and set SHELL=/bin/sh - std::map env = getEnv(); + StringMap env = getEnv(); // SSH will invoke the "user" shell for -oLocalCommand, but that means // $SHELL. 
To keep things simple and avoid potential issues with other diff --git a/src/libstore/store-api.cc b/src/libstore/store-api.cc index e8988127e..39de6808d 100644 --- a/src/libstore/store-api.cc +++ b/src/libstore/store-api.cc @@ -337,10 +337,10 @@ digraph graphname { node [shape=box] fileSource -> narSink narSink [style=dashed] - narSink -> unsualHashTee [style = dashed, label = "Recursive && !SHA-256"] + narSink -> unusualHashTee [style = dashed, label = "Recursive && !SHA-256"] narSink -> narHashSink [style = dashed, label = "else"] - unsualHashTee -> narHashSink - unsualHashTee -> caHashSink + unusualHashTee -> narHashSink + unusualHashTee -> caHashSink fileSource -> parseSink parseSink [style=dashed] parseSink-> fileSink [style = dashed, label = "Flat"] @@ -794,15 +794,12 @@ void Store::substitutePaths(const StorePathSet & paths) for (auto & path : paths) if (!path.isDerivation()) paths2.emplace_back(DerivedPath::Opaque{path}); - uint64_t downloadSize, narSize; - StorePathSet willBuild, willSubstitute, unknown; - queryMissing(paths2, - willBuild, willSubstitute, unknown, downloadSize, narSize); + auto missing = queryMissing(paths2); - if (!willSubstitute.empty()) + if (!missing.willSubstitute.empty()) try { std::vector subs; - for (auto & p : willSubstitute) subs.emplace_back(DerivedPath::Opaque{p}); + for (auto & p : missing.willSubstitute) subs.emplace_back(DerivedPath::Opaque{p}); buildPaths(subs); } catch (Error & e) { logWarning(e.info()); diff --git a/src/libstore/unix/build/darwin-derivation-builder.cc b/src/libstore/unix/build/darwin-derivation-builder.cc new file mode 100644 index 000000000..5e06dbe55 --- /dev/null +++ b/src/libstore/unix/build/darwin-derivation-builder.cc @@ -0,0 +1,209 @@ +#ifdef __APPLE__ + +# include +# include +# include + +/* This definition is undocumented but depended upon by all major browsers. */ +extern "C" int +sandbox_init_with_parameters(const char * profile, uint64_t flags, const char * const parameters[], char ** errorbuf); + +namespace nix { + +struct DarwinDerivationBuilder : DerivationBuilderImpl +{ + PathsInChroot pathsInChroot; + + /** + * Whether full sandboxing is enabled. Note that macOS builds + * always have *some* sandboxing (see sandbox-minimal.sb). + */ + bool useSandbox; + + DarwinDerivationBuilder( + Store & store, + std::unique_ptr miscMethods, + DerivationBuilderParams params, + bool useSandbox) + : DerivationBuilderImpl(store, std::move(miscMethods), std::move(params)) + , useSandbox(useSandbox) + { + } + + void prepareSandbox() override + { + pathsInChroot = getPathsInSandbox(); + } + + void setUser() override + { + DerivationBuilderImpl::setUser(); + + /* This has to appear before import statements. */ + std::string sandboxProfile = "(version 1)\n"; + + if (useSandbox) { + + /* Lots and lots and lots of file functions freak out if they can't stat their full ancestry */ + PathSet ancestry; + + /* We build the ancestry before adding all inputPaths to the store because we know they'll + all have the same parents (the store), and there might be lots of inputs. This isn't + particularly efficient... I doubt it'll be a bottleneck in practice */ + for (auto & i : pathsInChroot) { + Path cur = i.first; + while (cur.compare("/") != 0) { + cur = dirOf(cur); + ancestry.insert(cur); + } + } + + /* And we want the store in there regardless of how empty pathsInChroot. We include the innermost + path component this time, since it's typically /nix/store and we care about that. 
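An aside for readers following the Darwin sandbox-profile construction in this hunk: the ancestry walk described above can be illustrated with a small self-contained sketch. This is not Nix code (it uses `std::filesystem` instead of Nix's `dirOf`/`PathSet` helpers); it only shows the idea of collecting every parent directory so the generated profile can grant read access for `stat()` along the full chain.

```cpp
// Illustration only: collect every ancestor directory of a set of paths,
// mirroring the "ancestry" set built in this hunk so that the generated
// sandbox profile can allow stat()/read on the whole parent chain.
#include <filesystem>
#include <set>
#include <string>
#include <vector>

std::set<std::string> collectAncestry(const std::vector<std::string> & paths)
{
    std::set<std::string> ancestry;
    for (const auto & p : paths) {
        std::filesystem::path cur(p);
        // Walk upwards, inserting each parent, stopping once we reach "/".
        while (cur.has_parent_path() && cur != cur.parent_path()) {
            cur = cur.parent_path();
            ancestry.insert(cur.string());
        }
    }
    return ancestry;
}
```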
*/ + Path cur = store.storeDir; + while (cur.compare("/") != 0) { + ancestry.insert(cur); + cur = dirOf(cur); + } + + /* Add all our input paths to the chroot */ + for (auto & i : inputPaths) { + auto p = store.printStorePath(i); + pathsInChroot.insert_or_assign(p, p); + } + + /* Violations will go to the syslog if you set this. Unfortunately the destination does not appear to be + * configurable */ + if (settings.darwinLogSandboxViolations) { + sandboxProfile += "(deny default)\n"; + } else { + sandboxProfile += "(deny default (with no-log))\n"; + } + + sandboxProfile += +# include "sandbox-defaults.sb" + ; + + if (!derivationType.isSandboxed()) + sandboxProfile += +# include "sandbox-network.sb" + ; + + /* Add the output paths we'll use at build-time to the chroot */ + sandboxProfile += "(allow file-read* file-write* process-exec\n"; + for (auto & [_, path] : scratchOutputs) + sandboxProfile += fmt("\t(subpath \"%s\")\n", store.printStorePath(path)); + + sandboxProfile += ")\n"; + + /* Our inputs (transitive dependencies and any impurities computed above) + + without file-write* allowed, access() incorrectly returns EPERM + */ + sandboxProfile += "(allow file-read* file-write* process-exec\n"; + + // We create multiple allow lists, to avoid exceeding a limit in the darwin sandbox interpreter. + // See https://github.com/NixOS/nix/issues/4119 + // We split our allow groups approximately at half the actual limit, 1 << 16 + const size_t breakpoint = sandboxProfile.length() + (1 << 14); + for (auto & i : pathsInChroot) { + + if (sandboxProfile.length() >= breakpoint) { + debug("Sandbox break: %d %d", sandboxProfile.length(), breakpoint); + sandboxProfile += ")\n(allow file-read* file-write* process-exec\n"; + } + + if (i.first != i.second.source) + throw Error( + "can't map '%1%' to '%2%': mismatched impure paths not supported on Darwin", + i.first, + i.second.source); + + std::string path = i.first; + auto optSt = maybeLstat(path.c_str()); + if (!optSt) { + if (i.second.optional) + continue; + throw SysError("getting attributes of required path '%s", path); + } + if (S_ISDIR(optSt->st_mode)) + sandboxProfile += fmt("\t(subpath \"%s\")\n", path); + else + sandboxProfile += fmt("\t(literal \"%s\")\n", path); + } + sandboxProfile += ")\n"; + + /* Allow file-read* on full directory hierarchy to self. Allows realpath() */ + sandboxProfile += "(allow file-read*\n"; + for (auto & i : ancestry) { + sandboxProfile += fmt("\t(literal \"%s\")\n", i); + } + sandboxProfile += ")\n"; + + sandboxProfile += drvOptions.additionalSandboxProfile; + } else + sandboxProfile += +# include "sandbox-minimal.sb" + ; + + debug("Generated sandbox profile:"); + debug(sandboxProfile); + + /* The tmpDir in scope points at the temporary build directory for our derivation. Some packages try different + mechanisms to find temporary directories, so we want to open up a broader place for them to put their files, + if needed. 
*/ + Path globalTmpDir = canonPath(defaultTempDir(), true); + + /* They don't like trailing slashes on subpath directives */ + while (!globalTmpDir.empty() && globalTmpDir.back() == '/') + globalTmpDir.pop_back(); + + if (getEnv("_NIX_TEST_NO_SANDBOX") != "1") { + Strings sandboxArgs; + sandboxArgs.push_back("_GLOBAL_TMP_DIR"); + sandboxArgs.push_back(globalTmpDir); + if (drvOptions.allowLocalNetworking) { + sandboxArgs.push_back("_ALLOW_LOCAL_NETWORKING"); + sandboxArgs.push_back("1"); + } + char * sandbox_errbuf = nullptr; + if (sandbox_init_with_parameters( + sandboxProfile.c_str(), 0, stringsToCharPtrs(sandboxArgs).data(), &sandbox_errbuf)) { + writeFull( + STDERR_FILENO, + fmt("failed to configure sandbox: %s\n", sandbox_errbuf ? sandbox_errbuf : "(null)")); + _exit(1); + } + } + } + + void execBuilder(const Strings & args, const Strings & envStrs) override + { + posix_spawnattr_t attrp; + + if (posix_spawnattr_init(&attrp)) + throw SysError("failed to initialize builder"); + + if (posix_spawnattr_setflags(&attrp, POSIX_SPAWN_SETEXEC)) + throw SysError("failed to initialize builder"); + + if (drv.platform == "aarch64-darwin") { + // Unset kern.curproc_arch_affinity so we can escape Rosetta + int affinity = 0; + sysctlbyname("kern.curproc_arch_affinity", NULL, NULL, &affinity, sizeof(affinity)); + + cpu_type_t cpu = CPU_TYPE_ARM64; + posix_spawnattr_setbinpref_np(&attrp, 1, &cpu, NULL); + } else if (drv.platform == "x86_64-darwin") { + cpu_type_t cpu = CPU_TYPE_X86_64; + posix_spawnattr_setbinpref_np(&attrp, 1, &cpu, NULL); + } + + posix_spawn( + NULL, drv.builder.c_str(), NULL, &attrp, stringsToCharPtrs(args).data(), stringsToCharPtrs(envStrs).data()); + } +}; + +} + +#endif diff --git a/src/libstore/unix/build/derivation-builder.cc b/src/libstore/unix/build/derivation-builder.cc index 43dfe1832..15f95011d 100644 --- a/src/libstore/unix/build/derivation-builder.cc +++ b/src/libstore/unix/build/derivation-builder.cc @@ -1,22 +1,15 @@ #include "nix/store/build/derivation-builder.hh" +#include "nix/util/file-system.hh" #include "nix/store/local-store.hh" #include "nix/util/processes.hh" -#include "nix/store/indirect-root-store.hh" -#include "nix/store/build/hook-instance.hh" -#include "nix/store/build/worker.hh" #include "nix/store/builtins.hh" -#include "nix/store/builtins/buildenv.hh" #include "nix/store/path-references.hh" #include "nix/util/finally.hh" #include "nix/util/util.hh" #include "nix/util/archive.hh" #include "nix/util/git.hh" -#include "nix/util/compression.hh" #include "nix/store/daemon.hh" #include "nix/util/topo-sort.hh" -#include "nix/util/callback.hh" -#include "nix/util/json-utils.hh" -#include "nix/util/current-process.hh" #include "nix/store/build/child.hh" #include "nix/util/unix-domain-socket.hh" #include "nix/store/posix-fs-canonicalise.hh" @@ -39,35 +32,6 @@ # include #endif -/* Includes required for chroot support. */ -#ifdef __linux__ -# include "linux/fchmodat2-compat.hh" -# include -# include -# include -# include -# include -# include -# include -# include -# include "nix/util/namespaces.hh" -# if HAVE_SECCOMP -# include -# endif -# define pivot_root(new_root, put_old) (syscall(SYS_pivot_root, new_root, put_old)) -# include "nix/util/cgroup.hh" -# include "nix/store/personality.hh" -#endif - -#ifdef __APPLE__ -# include -# include -# include - -/* This definition is undocumented but depended upon by all major browsers. 
*/ -extern "C" int sandbox_init_with_parameters(const char *profile, uint64_t flags, const char *const parameters[], char **errorbuf); -#endif - #include #include #include @@ -92,8 +56,11 @@ MakeError(NotDeterministic, BuildError); * rather than incoming call edges that either should be removed, or * become (higher order) function parameters. */ -class DerivationBuilderImpl : public DerivationBuilder, DerivationBuilderParams +// FIXME: rename this to UnixDerivationBuilder or something like that. +class DerivationBuilderImpl : public DerivationBuilder, public DerivationBuilderParams { +protected: + Store & store; std::unique_ptr miscMethods; @@ -107,16 +74,15 @@ public: : DerivationBuilderParams{std::move(params)} , store{store} , miscMethods{std::move(miscMethods)} + , derivationType{drv.type()} { } - LocalStore & getLocalStore(); - -private: +protected: /** - * The cgroup of the builder, if any. + * User selected for running the builder. */ - std::optional cgroup; + std::unique_ptr buildUser; /** * The temporary directory used for the build. @@ -134,50 +100,12 @@ private: */ AutoCloseFD tmpDirFd; - /** - * The path of the temporary directory in the sandbox. - */ - Path tmpDirInSandbox; - - /** - * Pipe for synchronising updates to the builder namespaces. - */ - Pipe userNamespaceSync; - - /** - * The mount namespace and user namespace of the builder, used to add additional - * paths to the sandbox as a result of recursive Nix calls. - */ - AutoCloseFD sandboxMountNamespace; - AutoCloseFD sandboxUserNamespace; - - /** - * On Linux, whether we're doing the build in its own user - * namespace. - */ - bool usingUserNamespace = true; - - /** - * Whether we're currently doing a chroot build. - */ - bool useChroot = false; - - /** - * The root of the chroot environment. - */ - Path chrootRootDir; - - /** - * RAII object to delete the chroot directory. - */ - std::shared_ptr autoDelChroot; - /** * The sort of derivation we are building. * - * Just a cached value, can be recomputed from `drv`. + * Just a cached value, computed from `drv`. */ - std::optional derivationType; + const DerivationType derivationType; /** * Stuff we need to pass to initChild(). @@ -190,9 +118,8 @@ private: { } }; typedef std::map PathsInChroot; // maps target path to source path - PathsInChroot pathsInChroot; - typedef std::map Environment; + typedef StringMap Environment; Environment env; /** @@ -218,9 +145,6 @@ private: */ OutputPathMap scratchOutputs; - uid_t sandboxUid() { return usingUserNamespace ? (!buildUser || buildUser->getUIDCount() == 1 ? 1000 : 0) : buildUser->getUID(); } - gid_t sandboxGid() { return usingUserNamespace ? (!buildUser || buildUser->getUIDCount() == 1 ? 100 : 0) : buildUser->getGID(); } - const static Path homeDir; /** @@ -259,35 +183,89 @@ private: /** * Whether we need to perform hash rewriting if there are valid output paths. */ - bool needsHashRewrite(); + virtual bool needsHashRewrite() + { + return true; + } public: - /** - * Set up build environment / sandbox, acquiring resources (e.g. - * locks as needed). After this is run, the builder should be - * started. - * - * @returns true if successful, false if we could not acquire a build - * user. In that case, the caller must wait and then try again. - */ bool prepareBuild() override; - /** - * Start building a derivation. 
- */ - void startBuilder() override;; + void startBuilder() override; + + std::variant, SingleDrvOutputs> unprepareBuild() override; + +protected: /** - * Tear down build environment after the builder exits (either on - * its own or if it is killed). - * - * @returns The first case indicates failure during output - * processing. A status code and exception are returned, providing - * more information. The second case indicates success, and - * realisations for each output of the derivation are returned. + * Acquire a build user lock. Return nullptr if no lock is available. */ - std::variant, SingleDrvOutputs> unprepareBuild() override; + virtual std::unique_ptr getBuildUser() + { + return acquireUserLock(1, false); + } + + /** + * Return the paths that should be made available in the sandbox. + * This includes: + * + * * The paths specified by the `sandbox-paths` setting, and their closure in the Nix store. + * * The contents of the `__impureHostDeps` derivation attribute, if the sandbox is in relaxed mode. + * * The paths returned by the `pre-build-hook`. + * * The paths in the input closure of the derivation. + */ + PathsInChroot getPathsInSandbox(); + + virtual void setBuildTmpDir() + { + tmpDir = topTmpDir; + } + + /** + * Return the path of the temporary directory in the sandbox. + */ + virtual Path tmpDirInSandbox() + { + assert(!topTmpDir.empty()); + return topTmpDir; + } + + /** + * Ensure that there are no processes running that conflict with + * `buildUser`. + */ + virtual void prepareUser() + { + killSandbox(false); + } + + /** + * Called by prepareBuild() to do any setup in the parent to + * prepare for a sandboxed build. + */ + virtual void prepareSandbox(); + + virtual Strings getPreBuildHookArgs() + { + return Strings({store.printStorePath(drvPath)}); + } + + virtual Path realPathInSandbox(const Path & p) + { + return store.toRealPath(p); + } + + /** + * Open the slave side of the pseudoterminal and use it as stderr. + */ + void openSlave(); + + /** + * Called by prepareBuild() to start the child process for the + * build. Must set `pid`. The child must call runChild(). + */ + virtual void startChild(); private: @@ -296,15 +274,14 @@ private: */ void initEnv(); +protected: + /** * Process messages send by the sandbox initialization. */ void processSandboxSetupMessages(); - /** - * Setup tmp dir location. - */ - void initTmpDir(); +private: /** * Write a JSON file containing the derivation attributes. @@ -318,16 +295,14 @@ private: public: - /** - * Stop the in-process nix daemon thread. - * @see startDaemon - */ void stopDaemon() override; private: void addDependency(const StorePath & path) override; +protected: + /** * Make a file owned by the builder. * @@ -353,6 +328,28 @@ private: */ void runChild(); + /** + * Move the current process into the chroot, if any. Called early + * by runChild(). + */ + virtual void enterChroot() + { + } + + /** + * Change the current process's uid/gid to the build user, if + * any. Called by runChild(). + */ + virtual void setUser(); + + /** + * Execute the derivation builder process. Called by runChild() as + * its final step. Should not return unless there is an error. + */ + virtual void execBuilder(const Strings & args, const Strings & envStrs); + +private: + /** * Check that the derivation outputs all exist and register them * as valid. @@ -368,20 +365,17 @@ private: public: - /** - * Delete the temporary directory, if we have one. 
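To make the shape of this refactor easier to follow: the base class now keeps the portable build logic and exposes a small set of protected virtual hooks, which platform-specific builders (such as the Darwin one added in this patch) override. The sketch below is a deliberate simplification with invented names and signatures, not the actual declarations.

```cpp
// Simplified sketch of the hook-based design (hypothetical names/signatures,
// not the real classes): portable logic lives in the base, platform details
// live in small overrides instead of #ifdef blocks.
struct BuilderSketchBase
{
    virtual ~BuilderSketchBase() = default;

    void start()
    {
        prepareSandbox();   // parent-side, platform-specific setup
        startChild();       // start the process that will run the builder
    }

protected:
    virtual void prepareSandbox() {}   // e.g. compute the paths visible in the sandbox
    virtual void startChild() = 0;     // e.g. plain fork() vs. clone() with namespaces
    virtual void enterChroot() {}      // child-side confinement, if any
    virtual void execBuilder() = 0;    // finally exec the builder program
};

struct DarwinBuilderSketch : BuilderSketchBase
{
protected:
    void prepareSandbox() override { /* generate a sandbox profile string */ }
    void startChild() override     { /* fork(), then run the child-side steps */ }
    void execBuilder() override    { /* posix_spawn() with POSIX_SPAWN_SETEXEC */ }
};
```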
- */ void deleteTmpDir(bool force) override; - /** - * Kill any processes running under the build user UID or in the - * cgroup of the build. - */ void killSandbox(bool getStats) override; +protected: + + virtual void cleanupBuild(); + private: - bool cleanupDecideWhetherDiskFull(); + bool decideWhetherDiskFull(); /** * Create alternative path calculated from but distinct from the @@ -400,17 +394,6 @@ private: StorePath makeFallbackPath(OutputNameView outputName); }; -std::unique_ptr makeDerivationBuilder( - Store & store, - std::unique_ptr miscMethods, - DerivationBuilderParams params) -{ - return std::make_unique( - store, - std::move(miscMethods), - std::move(params)); -} - void handleDiffHook( uid_t uid, uid_t gid, const Path & tryA, const Path & tryB, @@ -448,18 +431,7 @@ void handleDiffHook( const Path DerivationBuilderImpl::homeDir = "/homeless-shelter"; -inline bool DerivationBuilderImpl::needsHashRewrite() -{ -#ifdef __linux__ - return !useChroot; -#else - /* Darwin requires hash rewriting even when sandboxing is enabled. */ - return true; -#endif -} - - -LocalStore & DerivationBuilderImpl::getLocalStore() +static LocalStore & getLocalStore(Store & store) { auto p = dynamic_cast(&store); assert(p); @@ -469,19 +441,7 @@ LocalStore & DerivationBuilderImpl::getLocalStore() void DerivationBuilderImpl::killSandbox(bool getStats) { - if (cgroup) { - #ifdef __linux__ - auto stats = destroyCgroup(*cgroup); - if (getStats) { - buildResult.cpuUser = stats.cpuUser; - buildResult.cpuSystem = stats.cpuSystem; - } - #else - unreachable(); - #endif - } - - else if (buildUser) { + if (buildUser) { auto uid = buildUser->getUID(); assert(uid != 0); killUser(uid); @@ -491,55 +451,12 @@ void DerivationBuilderImpl::killSandbox(bool getStats) bool DerivationBuilderImpl::prepareBuild() { - /* Cache this */ - derivationType = drv.type(); - - /* Are we doing a chroot build? */ - { - if (settings.sandboxMode == smEnabled) { - if (drvOptions.noChroot) - throw Error("derivation '%s' has '__noChroot' set, " - "but that's not allowed when 'sandbox' is 'true'", store.printStorePath(drvPath)); -#ifdef __APPLE__ - if (drvOptions.additionalSandboxProfile != "") - throw Error("derivation '%s' specifies a sandbox profile, " - "but this is only allowed when 'sandbox' is 'relaxed'", store.printStorePath(drvPath)); -#endif - useChroot = true; - } - else if (settings.sandboxMode == smDisabled) - useChroot = false; - else if (settings.sandboxMode == smRelaxed) - useChroot = derivationType->isSandboxed() && !drvOptions.noChroot; - } - - auto & localStore = getLocalStore(); - if (localStore.storeDir != localStore.config->realStoreDir.get()) { - #ifdef __linux__ - useChroot = true; - #else - throw Error("building using a diverted store is not supported on this platform"); - #endif - } - - #ifdef __linux__ - if (useChroot) { - if (!mountAndPidNamespacesSupported()) { - if (!settings.sandboxFallback) - throw Error("this system does not support the kernel namespaces that are required for sandboxing; use '--no-sandbox' to disable sandboxing"); - debug("auto-disabling sandboxing because the prerequisite namespaces are not available"); - useChroot = false; - } - } - #endif - if (useBuildUsers()) { if (!buildUser) - buildUser = acquireUserLock(drvOptions.useUidRange(drv) ? 
65536 : 1, useChroot); + buildUser = getBuildUser(); - if (!buildUser) { + if (!buildUser) return false; - } } return true; @@ -548,6 +465,7 @@ bool DerivationBuilderImpl::prepareBuild() std::variant, SingleDrvOutputs> DerivationBuilderImpl::unprepareBuild() { + // FIXME: get rid of this, rely on RAII. Finally releaseBuildUser([&](){ /* Release the build user at the end of this function. We don't do it right away because we don't want another build grabbing this @@ -555,9 +473,6 @@ std::variant, SingleDrvOutputs> Derivation buildUser.reset(); }); - sandboxMountNamespace = -1; - sandboxUserNamespace = -1; - /* Since we got an EOF on the logger pipe, the builder is presumed to have terminated. In fact, the builder could also have simply have closed its end of the pipe, so just to be sure, @@ -603,7 +518,9 @@ std::variant, SingleDrvOutputs> Derivation /* Check the exit status. */ if (!statusOk(status)) { - diskFull |= cleanupDecideWhetherDiskFull(); + diskFull |= decideWhetherDiskFull(); + + cleanupBuild(); auto msg = fmt( "Cannot build '%s'.\n" @@ -639,25 +556,25 @@ std::variant, SingleDrvOutputs> Derivation for (auto & i : redirectedOutputs) deletePath(store.Store::toRealPath(i.second)); - /* Delete the chroot (if we were using one). */ - autoDelChroot.reset(); /* this runs the destructor */ - deleteTmpDir(true); return std::move(builtOutputs); } catch (BuildError & e) { - assert(derivationType); BuildResult::Status st = dynamic_cast(&e) ? BuildResult::NotDeterministic : statusOk(status) ? BuildResult::OutputRejected : - !derivationType->isSandboxed() || diskFull ? BuildResult::TransientFailure : + !derivationType.isSandboxed() || diskFull ? BuildResult::TransientFailure : BuildResult::PermanentFailure; return std::pair{std::move(st), std::move(e)}; } } +void DerivationBuilderImpl::cleanupBuild() +{ + deleteTmpDir(false); +} static void chmod_(const Path & path, mode_t mode) { @@ -691,29 +608,36 @@ static void replaceValidPath(const Path & storePath, const Path & tmpPath) tmpPath (the replacement), so we have to move it out of the way first. We'd better not be interrupted here, because if we're repairing (say) Glibc, we end up with a broken system. */ - Path oldPath = fmt("%1%.old-%2%-%3%", storePath, getpid(), rand()); - if (pathExists(storePath)) - movePath(storePath, oldPath); + Path oldPath; + if (pathExists(storePath)) { + // why do we loop here? + // although makeTempPath should be unique, we can't + // guarantee that. + do { + oldPath = makeTempPath(storePath, ".old"); + // store paths are often directories so we can't just unlink() it + // let's make sure the path doesn't exist before we try to use it + } while (pathExists(oldPath)); + movePath(storePath, oldPath); + } try { movePath(tmpPath, storePath); } catch (...) { try { // attempt to recover - movePath(oldPath, storePath); + if (!oldPath.empty()) + movePath(oldPath, storePath); } catch (...) { ignoreExceptionExceptInterrupt(); } throw; } - - deletePath(oldPath); + if (!oldPath.empty()) + deletePath(oldPath); } - - - -bool DerivationBuilderImpl::cleanupDecideWhetherDiskFull() +bool DerivationBuilderImpl::decideWhetherDiskFull() { bool diskFull = false; @@ -724,7 +648,7 @@ bool DerivationBuilderImpl::cleanupDecideWhetherDiskFull() so, we don't mark this build as a permanent failure. 
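For reference, the heuristic this comment describes boils down to a `statvfs()` free-space check; a standalone version might look like the sketch below. The 8 MiB threshold matches the hard-coded value used just after, which the code itself marks as a FIXME.

```cpp
// Standalone sketch of the "disk (nearly) full" check: if the filesystem
// holding the given directory has less free space than some small threshold,
// treat a build failure as possibly transient rather than permanent.
#include <sys/statvfs.h>
#include <cstdint>
#include <string>

bool looksLikeDiskFull(const std::string & dir, uint64_t requiredBytes = 8ULL * 1024 * 1024)
{
    struct statvfs st;
    if (statvfs(dir.c_str(), &st) != 0)
        return false; // can't tell, so don't blame the disk
    // f_bavail: free blocks available to unprivileged users; f_frsize: block size.
    return (uint64_t) st.f_bavail * st.f_frsize < requiredBytes;
}
```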
*/ #if HAVE_STATVFS { - auto & localStore = getLocalStore(); + auto & localStore = getLocalStore(store); uint64_t required = 8ULL * 1024 * 1024; // FIXME: make configurable struct statvfs st; if (statvfs(localStore.config->realStoreDir.get().c_str(), &st) == 0 && @@ -736,58 +660,9 @@ bool DerivationBuilderImpl::cleanupDecideWhetherDiskFull() } #endif - deleteTmpDir(false); - - /* Move paths out of the chroot for easier debugging of - build failures. */ - if (useChroot && buildMode == bmNormal) - for (auto & [_, status] : initialOutputs) { - if (!status.known) continue; - if (buildMode != bmCheck && status.known->isValid()) continue; - auto p = store.toRealPath(status.known->path); - if (pathExists(chrootRootDir + p)) - std::filesystem::rename((chrootRootDir + p), p); - } - return diskFull; } - -#ifdef __linux__ -static void doBind(const Path & source, const Path & target, bool optional = false) { - debug("bind mounting '%1%' to '%2%'", source, target); - - auto bindMount = [&]() { - if (mount(source.c_str(), target.c_str(), "", MS_BIND | MS_REC, 0) == -1) - throw SysError("bind mount from '%1%' to '%2%' failed", source, target); - }; - - auto maybeSt = maybeLstat(source); - if (!maybeSt) { - if (optional) - return; - else - throw SysError("getting attributes of path '%1%'", source); - } - auto st = *maybeSt; - - if (S_ISDIR(st.st_mode)) { - createDirs(target); - bindMount(); - } else if (S_ISLNK(st.st_mode)) { - // Symlinks can (apparently) not be bind-mounted, so just copy it - createDirs(dirOf(target)); - copyFile( - std::filesystem::path(source), - std::filesystem::path(target), false); - } else { - createDirs(dirOf(target)); - writeFile(target, ""); - bindMount(); - } -}; -#endif - /** * Rethrow the current exception as a subclass of `Error`. */ @@ -823,62 +698,24 @@ static void handleChildException(bool sendException) } } +static bool checkNotWorldWritable(std::filesystem::path path) +{ + while (true) { + auto st = lstat(path); + if (st.st_mode & S_IWOTH) + return false; + if (path == path.parent_path()) break; + path = path.parent_path(); + } + return true; +} + void DerivationBuilderImpl::startBuilder() { - if ((buildUser && buildUser->getUIDCount() != 1) - #ifdef __linux__ - || settings.useCgroups - #endif - ) - { - #ifdef __linux__ - experimentalFeatureSettings.require(Xp::Cgroups); - - /* If we're running from the daemon, then this will return the - root cgroup of the service. Otherwise, it will return the - current cgroup. */ - auto rootCgroup = getRootCgroup(); - auto cgroupFS = getCgroupFS(); - if (!cgroupFS) - throw Error("cannot determine the cgroups file system"); - auto rootCgroupPath = canonPath(*cgroupFS + "/" + rootCgroup); - if (!pathExists(rootCgroupPath)) - throw Error("expected cgroup directory '%s'", rootCgroupPath); - - static std::atomic counter{0}; - - cgroup = buildUser - ? fmt("%s/nix-build-uid-%d", rootCgroupPath, buildUser->getUID()) - : fmt("%s/nix-build-pid-%d-%d", rootCgroupPath, getpid(), counter++); - - debug("using cgroup '%s'", *cgroup); - - /* When using a build user, record the cgroup we used for that - user so that if we got interrupted previously, we can kill - any left-over cgroup first. 
*/ - if (buildUser) { - auto cgroupsDir = settings.nixStateDir + "/cgroups"; - createDirs(cgroupsDir); - - auto cgroupFile = fmt("%s/%d", cgroupsDir, buildUser->getUID()); - - if (pathExists(cgroupFile)) { - auto prevCgroup = readFile(cgroupFile); - destroyCgroup(prevCgroup); - } - - writeFile(cgroupFile, *cgroup); - } - - #else - throw Error("cgroups are not supported on this platform"); - #endif - } - /* Make sure that no other processes are executing under the sandbox uids. This must be done before any chownToBuilder() calls. */ - killSandbox(false); + prepareUser(); /* Right platform? */ if (!drvOptions.canBuildLocally(store, drv)) { @@ -895,31 +732,23 @@ void DerivationBuilderImpl::startBuilder() // since aarch64-darwin has Rosetta 2, this user can actually run x86_64-darwin on their hardware - we should tell them to run the command to install Darwin 2 if (drv.platform == "x86_64-darwin" && settings.thisSystem == "aarch64-darwin") - msg += fmt("\nNote: run `%s` to run programs for x86_64-darwin", Magenta("/usr/sbin/softwareupdate --install-rosetta")); + msg += fmt("\nNote: run `%s` to run programs for x86_64-darwin", Magenta("/usr/sbin/softwareupdate --install-rosetta && launchctl stop org.nixos.nix-daemon")); throw BuildError(msg); } + auto buildDir = getLocalStore(store).config->getBuildDir(); + + createDirs(buildDir); + + if (buildUser && !checkNotWorldWritable(buildDir)) + throw Error("Path %s or a parent directory is world-writable or a symlink. That's not allowed for security.", buildDir); + /* Create a temporary directory where the build will take place. */ - topTmpDir = createTempDir(settings.buildDir.get().value_or(""), "nix-build-" + std::string(drvPath.name()), false, false, 0700); -#ifdef __APPLE__ - if (false) { -#else - if (useChroot) { -#endif - /* If sandboxing is enabled, put the actual TMPDIR underneath - an inaccessible root-owned directory, to prevent outside - access. - - On macOS, we don't use an actual chroot, so this isn't - possible. Any mitigation along these lines would have to be - done directly in the sandbox profile. */ - tmpDir = topTmpDir + "/build"; - createDir(tmpDir, 0700); - } else { - tmpDir = topTmpDir; - } + topTmpDir = createTempDir(buildDir, "nix-build-" + std::string(drvPath.name()), 0700); + setBuildTmpDir(); + assert(!tmpDir.empty()); /* The TOCTOU between the previous mkdir call and this open call is unavoidable due to POSIX semantics.*/ @@ -1005,236 +834,11 @@ void DerivationBuilderImpl::startBuilder() } } - if (useChroot) { - - /* Allow a user-configurable set of directories from the - host file system. */ - pathsInChroot.clear(); - - for (auto i : settings.sandboxPaths.get()) { - if (i.empty()) continue; - bool optional = false; - if (i[i.size() - 1] == '?') { - optional = true; - i.pop_back(); - } - size_t p = i.find('='); - - std::string inside, outside; - if (p == std::string::npos) { - inside = i; - outside = i; - } else { - inside = i.substr(0, p); - outside = i.substr(p + 1); - } - - if (!optional && !maybeLstat(outside)) { - throw SysError("path '%s' is configured as part of the `sandbox-paths` option, but is inaccessible", outside); - } - - pathsInChroot[inside] = {outside, optional}; - } - if (hasPrefix(store.storeDir, tmpDirInSandbox)) - { - throw Error("`sandbox-build-dir` must not contain the storeDir"); - } - pathsInChroot[tmpDirInSandbox] = tmpDir; - - /* Add the closure of store paths to the chroot. 
*/ - StorePathSet closure; - for (auto & i : pathsInChroot) - try { - if (store.isInStore(i.second.source)) - store.computeFSClosure(store.toStorePath(i.second.source).first, closure); - } catch (InvalidPath & e) { - } catch (Error & e) { - e.addTrace({}, "while processing 'sandbox-paths'"); - throw; - } - for (auto & i : closure) { - auto p = store.printStorePath(i); - pathsInChroot.insert_or_assign(p, p); - } - - PathSet allowedPaths = settings.allowedImpureHostPrefixes; - - /* This works like the above, except on a per-derivation level */ - auto impurePaths = drvOptions.impureHostDeps; - - for (auto & i : impurePaths) { - bool found = false; - /* Note: we're not resolving symlinks here to prevent - giving a non-root user info about inaccessible - files. */ - Path canonI = canonPath(i); - /* If only we had a trie to do this more efficiently :) luckily, these are generally going to be pretty small */ - for (auto & a : allowedPaths) { - Path canonA = canonPath(a); - if (isDirOrInDir(canonI, canonA)) { - found = true; - break; - } - } - if (!found) - throw Error("derivation '%s' requested impure path '%s', but it was not in allowed-impure-host-deps", - store.printStorePath(drvPath), i); - - /* Allow files in drvOptions.impureHostDeps to be missing; e.g. - macOS 11+ has no /usr/lib/libSystem*.dylib */ - pathsInChroot[i] = {i, true}; - } - -#ifdef __linux__ - /* Create a temporary directory in which we set up the chroot - environment using bind-mounts. We put it in the Nix store - so that the build outputs can be moved efficiently from the - chroot to their final location. */ - auto chrootParentDir = store.Store::toRealPath(drvPath) + ".chroot"; - deletePath(chrootParentDir); - - /* Clean up the chroot directory automatically. */ - autoDelChroot = std::make_shared(chrootParentDir); - - printMsg(lvlChatty, "setting up chroot environment in '%1%'", chrootParentDir); - - if (mkdir(chrootParentDir.c_str(), 0700) == -1) - throw SysError("cannot create '%s'", chrootRootDir); - - chrootRootDir = chrootParentDir + "/root"; - - if (mkdir(chrootRootDir.c_str(), buildUser && buildUser->getUIDCount() != 1 ? 0755 : 0750) == -1) - throw SysError("cannot create '%1%'", chrootRootDir); - - if (buildUser && chown(chrootRootDir.c_str(), buildUser->getUIDCount() != 1 ? buildUser->getUID() : 0, buildUser->getGID()) == -1) - throw SysError("cannot change ownership of '%1%'", chrootRootDir); - - /* Create a writable /tmp in the chroot. Many builders need - this. (Of course they should really respect $TMPDIR - instead.) */ - Path chrootTmpDir = chrootRootDir + "/tmp"; - createDirs(chrootTmpDir); - chmod_(chrootTmpDir, 01777); - - /* Create a /etc/passwd with entries for the build user and the - nobody account. The latter is kind of a hack to support - Samba-in-QEMU. */ - createDirs(chrootRootDir + "/etc"); - if (drvOptions.useUidRange(drv)) - chownToBuilder(chrootRootDir + "/etc"); - - if (drvOptions.useUidRange(drv) && (!buildUser || buildUser->getUIDCount() < 65536)) - throw Error("feature 'uid-range' requires the setting '%s' to be enabled", settings.autoAllocateUids.name); - - /* Declare the build user's group so that programs get a consistent - view of the system (e.g., "id -gn"). */ - writeFile(chrootRootDir + "/etc/group", - fmt("root:x:0:\n" - "nixbld:!:%1%:\n" - "nogroup:x:65534:\n", sandboxGid())); - - /* Create /etc/hosts with localhost entry. 
*/ - if (derivationType->isSandboxed()) - writeFile(chrootRootDir + "/etc/hosts", "127.0.0.1 localhost\n::1 localhost\n"); - - /* Make the closure of the inputs available in the chroot, - rather than the whole Nix store. This prevents any access - to undeclared dependencies. Directories are bind-mounted, - while other inputs are hard-linked (since only directories - can be bind-mounted). !!! As an extra security - precaution, make the fake Nix store only writable by the - build user. */ - Path chrootStoreDir = chrootRootDir + store.storeDir; - createDirs(chrootStoreDir); - chmod_(chrootStoreDir, 01775); - - if (buildUser && chown(chrootStoreDir.c_str(), 0, buildUser->getGID()) == -1) - throw SysError("cannot change ownership of '%1%'", chrootStoreDir); - - for (auto & i : inputPaths) { - auto p = store.printStorePath(i); - Path r = store.toRealPath(p); - pathsInChroot.insert_or_assign(p, r); - } - - /* If we're repairing, checking or rebuilding part of a - multiple-outputs derivation, it's possible that we're - rebuilding a path that is in settings.sandbox-paths - (typically the dependencies of /bin/sh). Throw them - out. */ - for (auto & i : drv.outputsAndOptPaths(store)) { - /* If the name isn't known a priori (i.e. floating - content-addressing derivation), the temporary location we use - should be fresh. Freshness means it is impossible that the path - is already in the sandbox, so we don't need to worry about - removing it. */ - if (i.second.second) - pathsInChroot.erase(store.printStorePath(*i.second.second)); - } - - if (cgroup) { - if (mkdir(cgroup->c_str(), 0755) != 0) - throw SysError("creating cgroup '%s'", *cgroup); - chownToBuilder(*cgroup); - chownToBuilder(*cgroup + "/cgroup.procs"); - chownToBuilder(*cgroup + "/cgroup.threads"); - //chownToBuilder(*cgroup + "/cgroup.subtree_control"); - } - -#else - if (drvOptions.useUidRange(drv)) - throw Error("feature 'uid-range' is not supported on this platform"); - #ifdef __APPLE__ - /* We don't really have any parent prep work to do (yet?) - All work happens in the child, instead. */ - #else - throw Error("sandboxing builds is not supported on this platform"); - #endif -#endif - } else { - if (drvOptions.useUidRange(drv)) - throw Error("feature 'uid-range' is only supported in sandboxed builds"); - } + prepareSandbox(); if (needsHashRewrite() && pathExists(homeDir)) throw Error("home directory '%1%' exists; please remove it to assure purity of builds without sandboxing", homeDir); - if (useChroot && settings.preBuildHook != "") { - printMsg(lvlChatty, "executing pre-build hook '%1%'", settings.preBuildHook); - auto args = useChroot ? 
Strings({store.printStorePath(drvPath), chrootRootDir}) : - Strings({ store.printStorePath(drvPath) }); - enum BuildHookState { - stBegin, - stExtraChrootDirs - }; - auto state = stBegin; - auto lines = runProgram(settings.preBuildHook, false, args); - auto lastPos = std::string::size_type{0}; - for (auto nlPos = lines.find('\n'); nlPos != std::string::npos; - nlPos = lines.find('\n', lastPos)) - { - auto line = lines.substr(lastPos, nlPos - lastPos); - lastPos = nlPos + 1; - if (state == stBegin) { - if (line == "extra-sandbox-paths" || line == "extra-chroot-dirs") { - state = stExtraChrootDirs; - } else { - throw Error("unknown pre-build hook command '%1%'", line); - } - } else if (state == stExtraChrootDirs) { - if (line == "") { - state = stBegin; - } else { - auto p = line.find('='); - if (p == std::string::npos) - pathsInChroot[line] = line; - else - pathsInChroot[line.substr(0, p)] = line.substr(p + 1); - } - } - } - } - /* Fire up a Nix daemon to process recursive Nix calls from the builder. */ if (drvOptions.getRequiredSystemFeatures(drv).count("recursive-nix")) @@ -1247,7 +851,7 @@ void DerivationBuilderImpl::startBuilder() printMsg(lvlVomit, "setting builder env variable '%1%'='%2%'", i.first, i.second); /* Create the log file. */ - [[maybe_unused]] Path logFile = miscMethods->openLogFile(); + miscMethods->openLogFile(); /* Create a pseudoterminal to get the output of the builder. */ builderOut = posix_openpt(O_RDWR | O_NOCTTY); @@ -1274,194 +878,168 @@ void DerivationBuilderImpl::startBuilder() if (unlockpt(builderOut.get())) throw SysError("unlocking pseudoterminal"); - /* Open the slave side of the pseudoterminal and use it as stderr. */ - auto openSlave = [&]() - { - AutoCloseFD builderOut = open(slaveName.c_str(), O_RDWR | O_NOCTTY); - if (!builderOut) - throw SysError("opening pseudoterminal slave"); - - // Put the pt into raw mode to prevent \n -> \r\n translation. - struct termios term; - if (tcgetattr(builderOut.get(), &term)) - throw SysError("getting pseudoterminal attributes"); - - cfmakeraw(&term); - - if (tcsetattr(builderOut.get(), TCSANOW, &term)) - throw SysError("putting pseudoterminal into raw mode"); - - if (dup2(builderOut.get(), STDERR_FILENO) == -1) - throw SysError("cannot pipe standard error into log file"); - }; - buildResult.startTime = time(0); - /* Fork a child to build the package. */ + /* Start a child process to build the derivation. */ + startChild(); -#ifdef __linux__ - if (useChroot) { - /* Set up private namespaces for the build: - - - The PID namespace causes the build to start as PID 1. - Processes outside of the chroot are not visible to those - on the inside, but processes inside the chroot are - visible from the outside (though with different PIDs). - - - The private mount namespace ensures that all the bind - mounts we do will only show up in this process and its - children, and will disappear automatically when we're - done. - - - The private network namespace ensures that the builder - cannot talk to the outside world (or vice versa). It - only has a private loopback interface. (Fixed-output - derivations are not run in a private network namespace - to allow functions like fetchurl to work.) - - - The IPC namespace prevents the builder from communicating - with outside processes using SysV IPC mechanisms (shared - memory, message queues, semaphores). It also ensures - that all IPC objects are destroyed when the builder - exits. - - - The UTS namespace ensures that builders see a hostname of - localhost rather than the actual hostname. 
- - We use a helper process to do the clone() to work around - clone() being broken in multi-threaded programs due to - at-fork handlers not being run. Note that we use - CLONE_PARENT to ensure that the real builder is parented to - us. - */ - - userNamespaceSync.create(); - - usingUserNamespace = userNamespacesSupported(); - - Pipe sendPid; - sendPid.create(); - - Pid helper = startProcess([&]() { - sendPid.readSide.close(); - - /* We need to open the slave early, before - CLONE_NEWUSER. Otherwise we get EPERM when running as - root. */ - openSlave(); - - try { - /* Drop additional groups here because we can't do it - after we've created the new user namespace. */ - if (setgroups(0, 0) == -1) { - if (errno != EPERM) - throw SysError("setgroups failed"); - if (settings.requireDropSupplementaryGroups) - throw Error("setgroups failed. Set the require-drop-supplementary-groups option to false to skip this step."); - } - - ProcessOptions options; - options.cloneFlags = CLONE_NEWPID | CLONE_NEWNS | CLONE_NEWIPC | CLONE_NEWUTS | CLONE_PARENT | SIGCHLD; - if (derivationType->isSandboxed()) - options.cloneFlags |= CLONE_NEWNET; - if (usingUserNamespace) - options.cloneFlags |= CLONE_NEWUSER; - - pid_t child = startProcess([&]() { runChild(); }, options); - - writeFull(sendPid.writeSide.get(), fmt("%d\n", child)); - _exit(0); - } catch (...) { - handleChildException(true); - _exit(1); - } - }); - - sendPid.writeSide.close(); - - if (helper.wait() != 0) { - processSandboxSetupMessages(); - // Only reached if the child process didn't send an exception. - throw Error("unable to start build process"); - } - - userNamespaceSync.readSide = -1; - - /* Close the write side to prevent runChild() from hanging - reading from this. */ - Finally cleanup([&]() { - userNamespaceSync.writeSide = -1; - }); - - auto ss = tokenizeString>(readLine(sendPid.readSide.get())); - assert(ss.size() == 1); - pid = string2Int(ss[0]).value(); - - if (usingUserNamespace) { - /* Set the UID/GID mapping of the builder's user namespace - such that the sandbox user maps to the build user, or to - the calling user (if build users are disabled). */ - uid_t hostUid = buildUser ? buildUser->getUID() : getuid(); - uid_t hostGid = buildUser ? buildUser->getGID() : getgid(); - uid_t nrIds = buildUser ? buildUser->getUIDCount() : 1; - - writeFile("/proc/" + std::to_string(pid) + "/uid_map", - fmt("%d %d %d", sandboxUid(), hostUid, nrIds)); - - if (!buildUser || buildUser->getUIDCount() == 1) - writeFile("/proc/" + std::to_string(pid) + "/setgroups", "deny"); - - writeFile("/proc/" + std::to_string(pid) + "/gid_map", - fmt("%d %d %d", sandboxGid(), hostGid, nrIds)); - } else { - debug("note: not using a user namespace"); - if (!buildUser) - throw Error("cannot perform a sandboxed build because user namespaces are not enabled; check /proc/sys/user/max_user_namespaces"); - } - - /* Now that we now the sandbox uid, we can write - /etc/passwd. */ - writeFile(chrootRootDir + "/etc/passwd", fmt( - "root:x:0:0:Nix build user:%3%:/noshell\n" - "nixbld:x:%1%:%2%:Nix build user:%3%:/noshell\n" - "nobody:x:65534:65534:Nobody:/:/noshell\n", - sandboxUid(), sandboxGid(), settings.sandboxBuildDir)); - - /* Save the mount- and user namespace of the child. We have to do this - *before* the child does a chroot. 
*/ - sandboxMountNamespace = open(fmt("/proc/%d/ns/mnt", (pid_t) pid).c_str(), O_RDONLY); - if (sandboxMountNamespace.get() == -1) - throw SysError("getting sandbox mount namespace"); - - if (usingUserNamespace) { - sandboxUserNamespace = open(fmt("/proc/%d/ns/user", (pid_t) pid).c_str(), O_RDONLY); - if (sandboxUserNamespace.get() == -1) - throw SysError("getting sandbox user namespace"); - } - - /* Move the child into its own cgroup. */ - if (cgroup) - writeFile(*cgroup + "/cgroup.procs", fmt("%d", (pid_t) pid)); - - /* Signal the builder that we've updated its user namespace. */ - writeFull(userNamespaceSync.writeSide.get(), "1"); - - } else -#endif - { - pid = startProcess([&]() { - openSlave(); - runChild(); - }); - } - - /* parent */ pid.setSeparatePG(true); miscMethods->childStarted(builderOut.get()); processSandboxSetupMessages(); } +DerivationBuilderImpl::PathsInChroot DerivationBuilderImpl::getPathsInSandbox() +{ + PathsInChroot pathsInChroot; + + /* Allow a user-configurable set of directories from the + host file system. */ + for (auto i : settings.sandboxPaths.get()) { + if (i.empty()) continue; + bool optional = false; + if (i[i.size() - 1] == '?') { + optional = true; + i.pop_back(); + } + + size_t p = i.find('='); + std::string inside, outside; + if (p == std::string::npos) { + inside = i; + outside = i; + } else { + inside = i.substr(0, p); + outside = i.substr(p + 1); + } + + if (!optional && !maybeLstat(outside)) + throw SysError("path '%s' is configured as part of the `sandbox-paths` option, but is inaccessible", outside); + + pathsInChroot[inside] = {outside, optional}; + } + + if (hasPrefix(store.storeDir, tmpDirInSandbox())) + throw Error("`sandbox-build-dir` must not contain the storeDir"); + + pathsInChroot[tmpDirInSandbox()] = tmpDir; + + /* Add the closure of store paths to the chroot. */ + StorePathSet closure; + for (auto & i : pathsInChroot) + try { + if (store.isInStore(i.second.source)) + store.computeFSClosure(store.toStorePath(i.second.source).first, closure); + } catch (InvalidPath & e) { + } catch (Error & e) { + e.addTrace({}, "while processing sandbox path '%s'", i.second.source); + throw; + } + for (auto & i : closure) { + auto p = store.printStorePath(i); + pathsInChroot.insert_or_assign(p, p); + } + + PathSet allowedPaths = settings.allowedImpureHostPrefixes; + + /* This works like the above, except on a per-derivation level */ + auto impurePaths = drvOptions.impureHostDeps; + + for (auto & i : impurePaths) { + bool found = false; + /* Note: we're not resolving symlinks here to prevent + giving a non-root user info about inaccessible + files. */ + Path canonI = canonPath(i); + /* If only we had a trie to do this more efficiently :) luckily, these are generally going to be pretty small */ + for (auto & a : allowedPaths) { + Path canonA = canonPath(a); + if (isDirOrInDir(canonI, canonA)) { + found = true; + break; + } + } + if (!found) + throw Error("derivation '%s' requested impure path '%s', but it was not in allowed-impure-host-deps", + store.printStorePath(drvPath), i); + + /* Allow files in drvOptions.impureHostDeps to be missing; e.g. 
+ macOS 11+ has no /usr/lib/libSystem*.dylib */ + pathsInChroot[i] = {i, true}; + } + + if (settings.preBuildHook != "") { + printMsg(lvlChatty, "executing pre-build hook '%1%'", settings.preBuildHook); + enum BuildHookState { + stBegin, + stExtraChrootDirs + }; + auto state = stBegin; + auto lines = runProgram(settings.preBuildHook, false, getPreBuildHookArgs()); + auto lastPos = std::string::size_type{0}; + for (auto nlPos = lines.find('\n'); nlPos != std::string::npos; + nlPos = lines.find('\n', lastPos)) + { + auto line = lines.substr(lastPos, nlPos - lastPos); + lastPos = nlPos + 1; + if (state == stBegin) { + if (line == "extra-sandbox-paths" || line == "extra-chroot-dirs") { + state = stExtraChrootDirs; + } else { + throw Error("unknown pre-build hook command '%1%'", line); + } + } else if (state == stExtraChrootDirs) { + if (line == "") { + state = stBegin; + } else { + auto p = line.find('='); + if (p == std::string::npos) + pathsInChroot[line] = line; + else + pathsInChroot[line.substr(0, p)] = line.substr(p + 1); + } + } + } + } + + return pathsInChroot; +} + +void DerivationBuilderImpl::prepareSandbox() +{ + if (drvOptions.useUidRange(drv)) + throw Error("feature 'uid-range' is not supported on this platform"); +} + +void DerivationBuilderImpl::openSlave() +{ + std::string slaveName = ptsname(builderOut.get()); + + AutoCloseFD builderOut = open(slaveName.c_str(), O_RDWR | O_NOCTTY); + if (!builderOut) + throw SysError("opening pseudoterminal slave"); + + // Put the pt into raw mode to prevent \n -> \r\n translation. + struct termios term; + if (tcgetattr(builderOut.get(), &term)) + throw SysError("getting pseudoterminal attributes"); + + cfmakeraw(&term); + + if (tcsetattr(builderOut.get(), TCSANOW, &term)) + throw SysError("putting pseudoterminal into raw mode"); + + if (dup2(builderOut.get(), STDERR_FILENO) == -1) + throw SysError("cannot pipe standard error into log file"); +} + +void DerivationBuilderImpl::startChild() +{ + pid = startProcess([&]() { + openSlave(); + runChild(); + }); +} void DerivationBuilderImpl::processSandboxSetupMessages() { @@ -1491,49 +1069,6 @@ void DerivationBuilderImpl::processSandboxSetupMessages() } } - -void DerivationBuilderImpl::initTmpDir() -{ - /* In a sandbox, for determinism, always use the same temporary - directory. */ -#ifdef __linux__ - tmpDirInSandbox = useChroot ? settings.sandboxBuildDir : tmpDir; -#else - tmpDirInSandbox = tmpDir; -#endif - - /* In non-structured mode, set all bindings either directory in the - environment or via a file, as specified by - `DerivationOptions::passAsFile`. */ - if (!parsedDrv) { - for (auto & i : drv.env) { - if (drvOptions.passAsFile.find(i.first) == drvOptions.passAsFile.end()) { - env[i.first] = i.second; - } else { - auto hash = hashString(HashAlgorithm::SHA256, i.first); - std::string fn = ".attr-" + hash.to_string(HashFormat::Nix32, false); - writeBuilderFile(fn, rewriteStrings(i.second, inputRewrites)); - env[i.first + "Path"] = tmpDirInSandbox + "/" + fn; - } - } - - } - - /* For convenience, set an environment pointing to the top build - directory. */ - env["NIX_BUILD_TOP"] = tmpDirInSandbox; - - /* Also set TMPDIR and variants to point to this directory. */ - env["TMPDIR"] = env["TEMPDIR"] = env["TMP"] = env["TEMP"] = tmpDirInSandbox; - - /* Explicitly set PWD to prevent problems with chroot builds. In - particular, dietlibc cannot figure out the cwd because the - inode of the current directory doesn't appear in .. (because - getdents returns the inode of the mount point). 
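Circling back to the `pre-build-hook` handling in `getPathsInSandbox()` above: the hook replies on stdout with a small line-based protocol, which the parsing loop shown there consumes. A hypothetical hook is sketched below for illustration only; real hooks are typically shell scripts, the paths are made up, and platform subclasses may append further arguments beyond the derivation's store path.

```cpp
// Hypothetical pre-build hook, for illustration. It is passed the
// derivation's store path as argv[1] and prints: a header line
// ("extra-sandbox-paths" or "extra-chroot-dirs"), then one
// "inside=outside" mapping per line (a bare path maps to itself), and an
// empty line to terminate the list.
#include <iostream>

int main(int argc, char * * argv)
{
    // argv[1] would be something like /nix/store/...-example.drv (unused here).
    std::cout << "extra-sandbox-paths\n";
    std::cout << "/etc/some-host-config\n";                // same path inside and outside (made up)
    std::cout << "/var/cache/inside=/var/cache/outside\n"; // remapped path (made up)
    std::cout << "\n";                                     // empty line ends the list
    return 0;
}
```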
*/ - env["PWD"] = tmpDirInSandbox; -} - - void DerivationBuilderImpl::initEnv() { env.clear(); @@ -1560,13 +1095,40 @@ void DerivationBuilderImpl::initEnv() /* The maximum number of cores to utilize for parallel building. */ env["NIX_BUILD_CORES"] = fmt("%d", settings.buildCores); - initTmpDir(); + /* In non-structured mode, set all bindings either directory in the + environment or via a file, as specified by + `DerivationOptions::passAsFile`. */ + if (!parsedDrv) { + for (auto & i : drv.env) { + if (drvOptions.passAsFile.find(i.first) == drvOptions.passAsFile.end()) { + env[i.first] = i.second; + } else { + auto hash = hashString(HashAlgorithm::SHA256, i.first); + std::string fn = ".attr-" + hash.to_string(HashFormat::Nix32, false); + writeBuilderFile(fn, rewriteStrings(i.second, inputRewrites)); + env[i.first + "Path"] = tmpDirInSandbox() + "/" + fn; + } + } + } + + /* For convenience, set an environment pointing to the top build + directory. */ + env["NIX_BUILD_TOP"] = tmpDirInSandbox(); + + /* Also set TMPDIR and variants to point to this directory. */ + env["TMPDIR"] = env["TEMPDIR"] = env["TMP"] = env["TEMP"] = tmpDirInSandbox(); + + /* Explicitly set PWD to prevent problems with chroot builds. In + particular, dietlibc cannot figure out the cwd because the + inode of the current directory doesn't appear in .. (because + getdents returns the inode of the mount point). */ + env["PWD"] = tmpDirInSandbox(); /* Compatibility hack with Nix <= 0.7: if this is a fixed-output derivation, tell the builder, so that for instance `fetchurl' can skip checking the output. On older Nixes, this environment variable won't be set, so `fetchurl' will do the check. */ - if (derivationType->isFixed()) env["NIX_OUTPUT_CHECKED"] = "1"; + if (derivationType.isFixed()) env["NIX_OUTPUT_CHECKED"] = "1"; /* *Only* if this is a fixed-output derivation, propagate the values of the environment variables specified in the @@ -1577,7 +1139,7 @@ void DerivationBuilderImpl::initEnv() to the builder is generally impure, but the output of fixed-output derivations is by definition pure (since we already know the cryptographic hash of the output). 
*/ - if (!derivationType->isSandboxed()) { + if (!derivationType.isSandboxed()) { auto & impureEnv = settings.impureEnv.get(); if (!impureEnv.empty()) experimentalFeatureSettings.require(Xp::ConfigurableImpureEnv); @@ -1622,9 +1184,9 @@ void DerivationBuilderImpl::writeStructuredAttrs() auto jsonSh = StructuredAttrs::writeShell(json); writeBuilderFile(".attrs.sh", rewriteStrings(jsonSh, inputRewrites)); - env["NIX_ATTRS_SH_FILE"] = tmpDirInSandbox + "/.attrs.sh"; + env["NIX_ATTRS_SH_FILE"] = tmpDirInSandbox() + "/.attrs.sh"; writeBuilderFile(".attrs.json", rewriteStrings(json.dump(), inputRewrites)); - env["NIX_ATTRS_JSON_FILE"] = tmpDirInSandbox + "/.attrs.json"; + env["NIX_ATTRS_JSON_FILE"] = tmpDirInSandbox() + "/.attrs.json"; } } @@ -1635,7 +1197,7 @@ void DerivationBuilderImpl::startDaemon() auto store = makeRestrictedStore( [&]{ - auto config = make_ref(*getLocalStore().config); + auto config = make_ref(*getLocalStore(this->store).config); config->pathInfoCacheSize = 0; config->stateDir = "/no-such-path"; config->logDir = "/no-such-path"; @@ -1648,7 +1210,7 @@ void DerivationBuilderImpl::startDaemon() auto socketName = ".nix-socket"; Path socketPath = tmpDir + "/" + socketName; - env["NIX_REMOTE"] = "unix://" + tmpDirInSandbox + "/" + socketName; + env["NIX_REMOTE"] = "unix://" + tmpDirInSandbox() + "/" + socketName; daemonSocket = createUnixDomainSocket(socketPath, 0600); @@ -1735,51 +1297,6 @@ void DerivationBuilderImpl::addDependency(const StorePath & path) if (isAllowed(path)) return; addedPaths.insert(path); - - /* If we're doing a sandbox build, then we have to make the path - appear in the sandbox. */ - if (useChroot) { - - debug("materialising '%s' in the sandbox", store.printStorePath(path)); - - #ifdef __linux__ - - Path source = store.Store::toRealPath(path); - Path target = chrootRootDir + store.printStorePath(path); - - if (pathExists(target)) { - // There is a similar debug message in doBind, so only run it in this block to not have double messages. - debug("bind-mounting %s -> %s", target, source); - throw Error("store path '%s' already exists in the sandbox", store.printStorePath(path)); - } - - /* Bind-mount the path into the sandbox. This requires - entering its mount namespace, which is not possible - in multithreaded programs. 
So we do this in a - child process.*/ - Pid child(startProcess([&]() { - - if (usingUserNamespace && (setns(sandboxUserNamespace.get(), 0) == -1)) - throw SysError("entering sandbox user namespace"); - - if (setns(sandboxMountNamespace.get(), 0) == -1) - throw SysError("entering sandbox mount namespace"); - - doBind(source, target); - - _exit(0); - })); - - int status = child.wait(); - if (status != 0) - throw Error("could not add path '%s' to sandbox", store.printStorePath(path)); - - #else - throw Error("don't know how to make path '%s' (produced by a recursive Nix call) appear in the sandbox", - store.printStorePath(path)); - #endif - - } } void DerivationBuilderImpl::chownToBuilder(const Path & path) @@ -1789,94 +1306,6 @@ void DerivationBuilderImpl::chownToBuilder(const Path & path) throw SysError("cannot change ownership of '%1%'", path); } - -void setupSeccomp() -{ -#ifdef __linux__ - if (!settings.filterSyscalls) return; -#if HAVE_SECCOMP - scmp_filter_ctx ctx; - - if (!(ctx = seccomp_init(SCMP_ACT_ALLOW))) - throw SysError("unable to initialize seccomp mode 2"); - - Finally cleanup([&]() { - seccomp_release(ctx); - }); - - constexpr std::string_view nativeSystem = NIX_LOCAL_SYSTEM; - - if (nativeSystem == "x86_64-linux" && - seccomp_arch_add(ctx, SCMP_ARCH_X86) != 0) - throw SysError("unable to add 32-bit seccomp architecture"); - - if (nativeSystem == "x86_64-linux" && - seccomp_arch_add(ctx, SCMP_ARCH_X32) != 0) - throw SysError("unable to add X32 seccomp architecture"); - - if (nativeSystem == "aarch64-linux" && - seccomp_arch_add(ctx, SCMP_ARCH_ARM) != 0) - printError("unable to add ARM seccomp architecture; this may result in spurious build failures if running 32-bit ARM processes"); - - if (nativeSystem == "mips64-linux" && - seccomp_arch_add(ctx, SCMP_ARCH_MIPS) != 0) - printError("unable to add mips seccomp architecture"); - - if (nativeSystem == "mips64-linux" && - seccomp_arch_add(ctx, SCMP_ARCH_MIPS64N32) != 0) - printError("unable to add mips64-*abin32 seccomp architecture"); - - if (nativeSystem == "mips64el-linux" && - seccomp_arch_add(ctx, SCMP_ARCH_MIPSEL) != 0) - printError("unable to add mipsel seccomp architecture"); - - if (nativeSystem == "mips64el-linux" && - seccomp_arch_add(ctx, SCMP_ARCH_MIPSEL64N32) != 0) - printError("unable to add mips64el-*abin32 seccomp architecture"); - - /* Prevent builders from creating setuid/setgid binaries. */ - for (int perm : { S_ISUID, S_ISGID }) { - if (seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), SCMP_SYS(chmod), 1, - SCMP_A1(SCMP_CMP_MASKED_EQ, (scmp_datum_t) perm, (scmp_datum_t) perm)) != 0) - throw SysError("unable to add seccomp rule"); - - if (seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), SCMP_SYS(fchmod), 1, - SCMP_A1(SCMP_CMP_MASKED_EQ, (scmp_datum_t) perm, (scmp_datum_t) perm)) != 0) - throw SysError("unable to add seccomp rule"); - - if (seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), SCMP_SYS(fchmodat), 1, - SCMP_A2(SCMP_CMP_MASKED_EQ, (scmp_datum_t) perm, (scmp_datum_t) perm)) != 0) - throw SysError("unable to add seccomp rule"); - - if (seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), NIX_SYSCALL_FCHMODAT2, 1, - SCMP_A2(SCMP_CMP_MASKED_EQ, (scmp_datum_t) perm, (scmp_datum_t) perm)) != 0) - throw SysError("unable to add seccomp rule"); - } - - /* Prevent builders from using EAs or ACLs. Not all filesystems - support these, and they're not allowed in the Nix store because - they're not representable in the NAR serialisation. 
*/ - if (seccomp_rule_add(ctx, SCMP_ACT_ERRNO(ENOTSUP), SCMP_SYS(getxattr), 0) != 0 || - seccomp_rule_add(ctx, SCMP_ACT_ERRNO(ENOTSUP), SCMP_SYS(lgetxattr), 0) != 0 || - seccomp_rule_add(ctx, SCMP_ACT_ERRNO(ENOTSUP), SCMP_SYS(fgetxattr), 0) != 0 || - seccomp_rule_add(ctx, SCMP_ACT_ERRNO(ENOTSUP), SCMP_SYS(setxattr), 0) != 0 || - seccomp_rule_add(ctx, SCMP_ACT_ERRNO(ENOTSUP), SCMP_SYS(lsetxattr), 0) != 0 || - seccomp_rule_add(ctx, SCMP_ACT_ERRNO(ENOTSUP), SCMP_SYS(fsetxattr), 0) != 0) - throw SysError("unable to add seccomp rule"); - - if (seccomp_attr_set(ctx, SCMP_FLTATR_CTL_NNP, settings.allowNewPrivileges ? 0 : 1) != 0) - throw SysError("unable to set 'no new privileges' seccomp attribute"); - - if (seccomp_load(ctx) != 0) - throw SysError("unable to load seccomp BPF program"); -#else - throw Error( - "seccomp is not supported on this platform; " - "you can bypass this error by setting the option 'filter-syscalls' to false, but note that untrusted builds can then create setuid binaries!"); -#endif -#endif -} - void DerivationBuilderImpl::chownToBuilder(int fd, const Path & path) { if (!buildUser) return; @@ -1907,20 +1336,12 @@ void DerivationBuilderImpl::runChild() commonChildInit(); - try { - setupSeccomp(); - } catch (...) { - if (buildUser) throw; - } - - bool setUser = true; - /* Make the contents of netrc and the CA certificate bundle available to builtin:fetchurl (which may run under a different uid and/or in a sandbox). */ BuiltinBuilderContext ctx{ .drv = drv, - .tmpDirInSandbox = tmpDirInSandbox, + .tmpDirInSandbox = tmpDirInSandbox(), }; if (drv.isBuiltin() && drv.builder == "builtin:fetchurl") { @@ -1933,419 +1354,28 @@ void DerivationBuilderImpl::runChild() } catch (SystemError &) { } } -#ifdef __linux__ - if (useChroot) { + enterChroot(); - userNamespaceSync.writeSide = -1; - - if (drainFD(userNamespaceSync.readSide.get()) != "1") - throw Error("user namespace initialisation failed"); - - userNamespaceSync.readSide = -1; - - if (derivationType->isSandboxed()) { - - /* Initialise the loopback interface. */ - AutoCloseFD fd(socket(PF_INET, SOCK_DGRAM, IPPROTO_IP)); - if (!fd) throw SysError("cannot open IP socket"); - - struct ifreq ifr; - strcpy(ifr.ifr_name, "lo"); - ifr.ifr_flags = IFF_UP | IFF_LOOPBACK | IFF_RUNNING; - if (ioctl(fd.get(), SIOCSIFFLAGS, &ifr) == -1) - throw SysError("cannot set loopback interface flags"); - } - - /* Set the hostname etc. to fixed values. */ - char hostname[] = "localhost"; - if (sethostname(hostname, sizeof(hostname)) == -1) - throw SysError("cannot set host name"); - char domainname[] = "(none)"; // kernel default - if (setdomainname(domainname, sizeof(domainname)) == -1) - throw SysError("cannot set domain name"); - - /* Make all filesystems private. This is necessary - because subtrees may have been mounted as "shared" - (MS_SHARED). (Systemd does this, for instance.) Even - though we have a private mount namespace, mounting - filesystems on top of a shared subtree still propagates - outside of the namespace. Making a subtree private is - local to the namespace, though, so setting MS_PRIVATE - does not affect the outside world. */ - if (mount(0, "/", 0, MS_PRIVATE | MS_REC, 0) == -1) - throw SysError("unable to make '/' private"); - - /* Bind-mount chroot directory to itself, to treat it as a - different filesystem from /, as needed for pivot_root. 
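The seccomp rules above (removed here and reinstated in `linux-derivation-builder.cc` below) turn attempts to create setuid/setgid binaries into `EPERM` and extended-attribute syscalls into `ENOTSUP`. A small probe of what a builder observes under that filter; a hedged sketch assuming it is compiled and run inside a sandboxed Linux build:

```c++
// Illustrative probe of the seccomp filter's effect, run inside a build.
#include <cerrno>
#include <cstdio>
#include <fcntl.h>
#include <sys/stat.h>
#include <sys/xattr.h>
#include <unistd.h>

int main()
{
    int fd = open("probe", O_CREAT | O_WRONLY, 0700);
    if (fd == -1) return 1;
    close(fd);

    // 04755 carries S_ISUID, so the chmod/fchmodat rules reject it with EPERM.
    if (chmod("probe", 04755) == -1)
        std::printf("chmod u+s: %s\n", errno == EPERM ? "blocked (EPERM)" : "failed");

    // Extended attributes are rejected with ENOTSUP.
    if (setxattr("probe", "user.note", "x", 1, 0) == -1)
        std::printf("setxattr: %s\n", errno == ENOTSUP ? "blocked (ENOTSUP)" : "failed");

    return 0;
}
```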
*/ - if (mount(chrootRootDir.c_str(), chrootRootDir.c_str(), 0, MS_BIND, 0) == -1) - throw SysError("unable to bind mount '%1%'", chrootRootDir); - - /* Bind-mount the sandbox's Nix store onto itself so that - we can mark it as a "shared" subtree, allowing bind - mounts made in *this* mount namespace to be propagated - into the child namespace created by the - unshare(CLONE_NEWNS) call below. - - Marking chrootRootDir as MS_SHARED causes pivot_root() - to fail with EINVAL. Don't know why. */ - Path chrootStoreDir = chrootRootDir + store.storeDir; - - if (mount(chrootStoreDir.c_str(), chrootStoreDir.c_str(), 0, MS_BIND, 0) == -1) - throw SysError("unable to bind mount the Nix store", chrootStoreDir); - - if (mount(0, chrootStoreDir.c_str(), 0, MS_SHARED, 0) == -1) - throw SysError("unable to make '%s' shared", chrootStoreDir); - - /* Set up a nearly empty /dev, unless the user asked to - bind-mount the host /dev. */ - Strings ss; - if (pathsInChroot.find("/dev") == pathsInChroot.end()) { - createDirs(chrootRootDir + "/dev/shm"); - createDirs(chrootRootDir + "/dev/pts"); - ss.push_back("/dev/full"); - if (store.config.systemFeatures.get().count("kvm") && pathExists("/dev/kvm")) - ss.push_back("/dev/kvm"); - ss.push_back("/dev/null"); - ss.push_back("/dev/random"); - ss.push_back("/dev/tty"); - ss.push_back("/dev/urandom"); - ss.push_back("/dev/zero"); - createSymlink("/proc/self/fd", chrootRootDir + "/dev/fd"); - createSymlink("/proc/self/fd/0", chrootRootDir + "/dev/stdin"); - createSymlink("/proc/self/fd/1", chrootRootDir + "/dev/stdout"); - createSymlink("/proc/self/fd/2", chrootRootDir + "/dev/stderr"); - } - - /* Fixed-output derivations typically need to access the - network, so give them access to /etc/resolv.conf and so - on. */ - if (!derivationType->isSandboxed()) { - // Only use nss functions to resolve hosts and - // services. Don’t use it for anything else that may - // be configured for this system. This limits the - // potential impurities introduced in fixed-outputs. - writeFile(chrootRootDir + "/etc/nsswitch.conf", "hosts: files dns\nservices: files\n"); - - /* N.B. it is realistic that these paths might not exist. It - happens when testing Nix building fixed-output derivations - within a pure derivation. */ - for (auto & path : { "/etc/resolv.conf", "/etc/services", "/etc/hosts" }) - if (pathExists(path)) - ss.push_back(path); - - if (settings.caFile != "") { - Path caFile = settings.caFile; - if (pathExists(caFile)) - pathsInChroot.try_emplace("/etc/ssl/certs/ca-certificates.crt", canonPath(caFile, true), true); - } - } - - for (auto & i : ss) { - // For backwards-compatibiliy, resolve all the symlinks in the - // chroot paths - auto canonicalPath = canonPath(i, true); - pathsInChroot.emplace(i, canonicalPath); - } - - /* Bind-mount all the directories from the "host" - filesystem that we want in the chroot - environment. */ - for (auto & i : pathsInChroot) { - if (i.second.source == "/proc") continue; // backwards compatibility - - #if HAVE_EMBEDDED_SANDBOX_SHELL - if (i.second.source == "__embedded_sandbox_shell__") { - static unsigned char sh[] = { - #include "embedded-sandbox-shell.gen.hh" - }; - auto dst = chrootRootDir + i.first; - createDirs(dirOf(dst)); - writeFile(dst, std::string_view((const char *) sh, sizeof(sh))); - chmod_(dst, 0555); - } else - #endif - doBind(i.second.source, chrootRootDir + i.first, i.second.optional); - } - - /* Bind a new instance of procfs on /proc. 
*/ - createDirs(chrootRootDir + "/proc"); - if (mount("none", (chrootRootDir + "/proc").c_str(), "proc", 0, 0) == -1) - throw SysError("mounting /proc"); - - /* Mount sysfs on /sys. */ - if (buildUser && buildUser->getUIDCount() != 1) { - createDirs(chrootRootDir + "/sys"); - if (mount("none", (chrootRootDir + "/sys").c_str(), "sysfs", 0, 0) == -1) - throw SysError("mounting /sys"); - } - - /* Mount a new tmpfs on /dev/shm to ensure that whatever - the builder puts in /dev/shm is cleaned up automatically. */ - if (pathExists("/dev/shm") && mount("none", (chrootRootDir + "/dev/shm").c_str(), "tmpfs", 0, - fmt("size=%s", settings.sandboxShmSize).c_str()) == -1) - throw SysError("mounting /dev/shm"); - - /* Mount a new devpts on /dev/pts. Note that this - requires the kernel to be compiled with - CONFIG_DEVPTS_MULTIPLE_INSTANCES=y (which is the case - if /dev/ptx/ptmx exists). */ - if (pathExists("/dev/pts/ptmx") && - !pathExists(chrootRootDir + "/dev/ptmx") - && !pathsInChroot.count("/dev/pts")) - { - if (mount("none", (chrootRootDir + "/dev/pts").c_str(), "devpts", 0, "newinstance,mode=0620") == 0) - { - createSymlink("/dev/pts/ptmx", chrootRootDir + "/dev/ptmx"); - - /* Make sure /dev/pts/ptmx is world-writable. With some - Linux versions, it is created with permissions 0. */ - chmod_(chrootRootDir + "/dev/pts/ptmx", 0666); - } else { - if (errno != EINVAL) - throw SysError("mounting /dev/pts"); - doBind("/dev/pts", chrootRootDir + "/dev/pts"); - doBind("/dev/ptmx", chrootRootDir + "/dev/ptmx"); - } - } - - /* Make /etc unwritable */ - if (!drvOptions.useUidRange(drv)) - chmod_(chrootRootDir + "/etc", 0555); - - /* Unshare this mount namespace. This is necessary because - pivot_root() below changes the root of the mount - namespace. This means that the call to setns() in - addDependency() would hide the host's filesystem, - making it impossible to bind-mount paths from the host - Nix store into the sandbox. Therefore, we save the - pre-pivot_root namespace in - sandboxMountNamespace. Since we made /nix/store a - shared subtree above, this allows addDependency() to - make paths appear in the sandbox. */ - if (unshare(CLONE_NEWNS) == -1) - throw SysError("unsharing mount namespace"); - - /* Unshare the cgroup namespace. This means - /proc/self/cgroup will show the child's cgroup as '/' - rather than whatever it is in the parent. */ - if (cgroup && unshare(CLONE_NEWCGROUP) == -1) - throw SysError("unsharing cgroup namespace"); - - /* Do the chroot(). */ - if (chdir(chrootRootDir.c_str()) == -1) - throw SysError("cannot change directory to '%1%'", chrootRootDir); - - if (mkdir("real-root", 0500) == -1) - throw SysError("cannot create real-root directory"); - - if (pivot_root(".", "real-root") == -1) - throw SysError("cannot pivot old root directory onto '%1%'", (chrootRootDir + "/real-root")); - - if (chroot(".") == -1) - throw SysError("cannot change root directory to '%1%'", chrootRootDir); - - if (umount2("real-root", MNT_DETACH) == -1) - throw SysError("cannot unmount real root filesystem"); - - if (rmdir("real-root") == -1) - throw SysError("cannot remove real-root directory"); - - /* Switch to the sandbox uid/gid in the user namespace, - which corresponds to the build user or calling user in - the parent namespace. 
*/ - if (setgid(sandboxGid()) == -1) - throw SysError("setgid failed"); - if (setuid(sandboxUid()) == -1) - throw SysError("setuid failed"); - - setUser = false; - } -#endif - - if (chdir(tmpDirInSandbox.c_str()) == -1) + if (chdir(tmpDirInSandbox().c_str()) == -1) throw SysError("changing into '%1%'", tmpDir); /* Close all other file descriptors. */ unix::closeExtraFDs(); -#ifdef __linux__ - linux::setPersonality(drv.platform); -#endif - /* Disable core dumps by default. */ struct rlimit limit = { 0, RLIM_INFINITY }; setrlimit(RLIMIT_CORE, &limit); // FIXME: set other limits to deterministic values? - /* Fill in the environment. */ - Strings envStrs; - for (auto & i : env) - envStrs.push_back(rewriteStrings(i.first + "=" + i.second, inputRewrites)); - - /* If we are running in `build-users' mode, then switch to the - user we allocated above. Make sure that we drop all root - privileges. Note that above we have closed all file - descriptors except std*, so that's safe. Also note that - setuid() when run as root sets the real, effective and - saved UIDs. */ - if (setUser && buildUser) { - /* Preserve supplementary groups of the build user, to allow - admins to specify groups such as "kvm". */ - auto gids = buildUser->getSupplementaryGIDs(); - if (setgroups(gids.size(), gids.data()) == -1) - throw SysError("cannot set supplementary groups of build user"); - - if (setgid(buildUser->getGID()) == -1 || - getgid() != buildUser->getGID() || - getegid() != buildUser->getGID()) - throw SysError("setgid failed"); - - if (setuid(buildUser->getUID()) == -1 || - getuid() != buildUser->getUID() || - geteuid() != buildUser->getUID()) - throw SysError("setuid failed"); - } - -#ifdef __APPLE__ - /* This has to appear before import statements. */ - std::string sandboxProfile = "(version 1)\n"; - - if (useChroot) { - - /* Lots and lots and lots of file functions freak out if they can't stat their full ancestry */ - PathSet ancestry; - - /* We build the ancestry before adding all inputPaths to the store because we know they'll - all have the same parents (the store), and there might be lots of inputs. This isn't - particularly efficient... I doubt it'll be a bottleneck in practice */ - for (auto & i : pathsInChroot) { - Path cur = i.first; - while (cur.compare("/") != 0) { - cur = dirOf(cur); - ancestry.insert(cur); - } - } - - /* And we want the store in there regardless of how empty pathsInChroot. We include the innermost - path component this time, since it's typically /nix/store and we care about that. */ - Path cur = store.storeDir; - while (cur.compare("/") != 0) { - ancestry.insert(cur); - cur = dirOf(cur); - } - - /* Add all our input paths to the chroot */ - for (auto & i : inputPaths) { - auto p = store.printStorePath(i); - pathsInChroot[p] = p; - } - - /* Violations will go to the syslog if you set this. 
Unfortunately the destination does not appear to be configurable */ - if (settings.darwinLogSandboxViolations) { - sandboxProfile += "(deny default)\n"; - } else { - sandboxProfile += "(deny default (with no-log))\n"; - } - - sandboxProfile += - #include "sandbox-defaults.sb" - ; - - if (!derivationType->isSandboxed()) - sandboxProfile += - #include "sandbox-network.sb" - ; - - /* Add the output paths we'll use at build-time to the chroot */ - sandboxProfile += "(allow file-read* file-write* process-exec\n"; - for (auto & [_, path] : scratchOutputs) - sandboxProfile += fmt("\t(subpath \"%s\")\n", store.printStorePath(path)); - - sandboxProfile += ")\n"; - - /* Our inputs (transitive dependencies and any impurities computed above) - - without file-write* allowed, access() incorrectly returns EPERM - */ - sandboxProfile += "(allow file-read* file-write* process-exec\n"; - - // We create multiple allow lists, to avoid exceeding a limit in the darwin sandbox interpreter. - // See https://github.com/NixOS/nix/issues/4119 - // We split our allow groups approximately at half the actual limit, 1 << 16 - const size_t breakpoint = sandboxProfile.length() + (1 << 14); - for (auto & i : pathsInChroot) { - - if (sandboxProfile.length() >= breakpoint) { - debug("Sandbox break: %d %d", sandboxProfile.length(), breakpoint); - sandboxProfile += ")\n(allow file-read* file-write* process-exec\n"; - } - - if (i.first != i.second.source) - throw Error( - "can't map '%1%' to '%2%': mismatched impure paths not supported on Darwin", - i.first, i.second.source); - - std::string path = i.first; - auto optSt = maybeLstat(path.c_str()); - if (!optSt) { - if (i.second.optional) - continue; - throw SysError("getting attributes of required path '%s", path); - } - if (S_ISDIR(optSt->st_mode)) - sandboxProfile += fmt("\t(subpath \"%s\")\n", path); - else - sandboxProfile += fmt("\t(literal \"%s\")\n", path); - } - sandboxProfile += ")\n"; - - /* Allow file-read* on full directory hierarchy to self. Allows realpath() */ - sandboxProfile += "(allow file-read*\n"; - for (auto & i : ancestry) { - sandboxProfile += fmt("\t(literal \"%s\")\n", i); - } - sandboxProfile += ")\n"; - - sandboxProfile += drvOptions.additionalSandboxProfile; - } else - sandboxProfile += - #include "sandbox-minimal.sb" - ; - - debug("Generated sandbox profile:"); - debug(sandboxProfile); - - /* The tmpDir in scope points at the temporary build directory for our derivation. Some packages try different mechanisms - to find temporary directories, so we want to open up a broader place for them to put their files, if needed. */ - Path globalTmpDir = canonPath(defaultTempDir(), true); - - /* They don't like trailing slashes on subpath directives */ - while (!globalTmpDir.empty() && globalTmpDir.back() == '/') - globalTmpDir.pop_back(); - - if (getEnv("_NIX_TEST_NO_SANDBOX") != "1") { - Strings sandboxArgs; - sandboxArgs.push_back("_GLOBAL_TMP_DIR"); - sandboxArgs.push_back(globalTmpDir); - if (drvOptions.allowLocalNetworking) { - sandboxArgs.push_back("_ALLOW_LOCAL_NETWORKING"); - sandboxArgs.push_back("1"); - } - char * sandbox_errbuf = nullptr; - if (sandbox_init_with_parameters(sandboxProfile.c_str(), 0, stringsToCharPtrs(sandboxArgs).data(), &sandbox_errbuf)) { - writeFull(STDERR_FILENO, fmt("failed to configure sandbox: %s\n", sandbox_errbuf ? sandbox_errbuf : "(null)")); - _exit(1); - } - } -#endif + setUser(); /* Indicate that we managed to set up the build environment. 
*/ writeFull(STDERR_FILENO, std::string("\2\n")); sendException = false; - /* Execute the program. This should not return. */ + /* If this is a builtin builder, call it now. This should not return. */ if (drv.isBuiltin()) { try { logger = makeJSONLogger(getStandardError()); @@ -2367,7 +1397,7 @@ void DerivationBuilderImpl::runChild() } } - // Now builder is not builtin + /* It's not a builtin builder, so execute the program. */ Strings args; args.push_back(std::string(baseNameOf(drv.builder))); @@ -2375,31 +1405,11 @@ void DerivationBuilderImpl::runChild() for (auto & i : drv.args) args.push_back(rewriteStrings(i, inputRewrites)); -#ifdef __APPLE__ - posix_spawnattr_t attrp; + Strings envStrs; + for (auto & i : env) + envStrs.push_back(rewriteStrings(i.first + "=" + i.second, inputRewrites)); - if (posix_spawnattr_init(&attrp)) - throw SysError("failed to initialize builder"); - - if (posix_spawnattr_setflags(&attrp, POSIX_SPAWN_SETEXEC)) - throw SysError("failed to initialize builder"); - - if (drv.platform == "aarch64-darwin") { - // Unset kern.curproc_arch_affinity so we can escape Rosetta - int affinity = 0; - sysctlbyname("kern.curproc_arch_affinity", NULL, NULL, &affinity, sizeof(affinity)); - - cpu_type_t cpu = CPU_TYPE_ARM64; - posix_spawnattr_setbinpref_np(&attrp, 1, &cpu, NULL); - } else if (drv.platform == "x86_64-darwin") { - cpu_type_t cpu = CPU_TYPE_X86_64; - posix_spawnattr_setbinpref_np(&attrp, 1, &cpu, NULL); - } - - posix_spawn(NULL, drv.builder.c_str(), NULL, &attrp, stringsToCharPtrs(args).data(), stringsToCharPtrs(envStrs).data()); -#else - execve(drv.builder.c_str(), stringsToCharPtrs(args).data(), stringsToCharPtrs(envStrs).data()); -#endif + execBuilder(args, envStrs); throw SysError("executing '%1%'", drv.builder); @@ -2409,6 +1419,37 @@ void DerivationBuilderImpl::runChild() } } +void DerivationBuilderImpl::setUser() +{ + /* If we are running in `build-users' mode, then switch to the + user we allocated above. Make sure that we drop all root + privileges. Note that above we have closed all file + descriptors except std*, so that's safe. Also note that + setuid() when run as root sets the real, effective and + saved UIDs. */ + if (buildUser) { + /* Preserve supplementary groups of the build user, to allow + admins to specify groups such as "kvm". */ + auto gids = buildUser->getSupplementaryGIDs(); + if (setgroups(gids.size(), gids.data()) == -1) + throw SysError("cannot set supplementary groups of build user"); + + if (setgid(buildUser->getGID()) == -1 || + getgid() != buildUser->getGID() || + getegid() != buildUser->getGID()) + throw SysError("setgid failed"); + + if (setuid(buildUser->getUID()) == -1 || + getuid() != buildUser->getUID() || + geteuid() != buildUser->getUID()) + throw SysError("setuid failed"); + } +} + +void DerivationBuilderImpl::execBuilder(const Strings & args, const Strings & envStrs) +{ + execve(drv.builder.c_str(), stringsToCharPtrs(args).data(), stringsToCharPtrs(envStrs).data()); +} SingleDrvOutputs DerivationBuilderImpl::registerOutputs() { @@ -2431,14 +1472,6 @@ SingleDrvOutputs DerivationBuilderImpl::registerOutputs() for (auto & i : scratchOutputs) referenceablePaths.insert(i.second); for (auto & p : addedPaths) referenceablePaths.insert(p); - /* FIXME `needsHashRewrite` should probably be removed and we get to the - real reason why we aren't using the chroot dir */ - auto toRealPathChroot = [&](const Path & p) -> Path { - return useChroot && !needsHashRewrite() - ? 
chrootRootDir + p - : store.toRealPath(p); - }; - /* Check whether the output paths were created, and make all output paths read-only. Then get the references of each output (that we might need to register), so we can topologically sort them. For the ones @@ -2455,7 +1488,7 @@ SingleDrvOutputs DerivationBuilderImpl::registerOutputs() throw BuildError( "builder for '%s' has no scratch output for '%s'", store.printStorePath(drvPath), outputName); - auto actualPath = toRealPathChroot(store.printStorePath(*scratchOutput)); + auto actualPath = realPathInSandbox(store.printStorePath(*scratchOutput)); outputsToSort.insert(outputName); @@ -2564,7 +1597,7 @@ SingleDrvOutputs DerivationBuilderImpl::registerOutputs() auto output = get(drv.outputs, outputName); auto scratchPath = get(scratchOutputs, outputName); assert(output && scratchPath); - auto actualPath = toRealPathChroot(store.printStorePath(*scratchPath)); + auto actualPath = realPathInSandbox(store.printStorePath(*scratchPath)); auto finish = [&](StorePath finalStorePath) { /* Store the final path */ @@ -2847,7 +1880,7 @@ SingleDrvOutputs DerivationBuilderImpl::registerOutputs() } } - auto & localStore = getLocalStore(); + auto & localStore = getLocalStore(store); if (buildMode == bmCheck) { @@ -2914,7 +1947,7 @@ SingleDrvOutputs DerivationBuilderImpl::registerOutputs() also a source for non-determinism. */ if (delayedException) std::rethrow_exception(delayedException); - return miscMethods->assertPathValidity(); + return {}; } /* Apply output checks. */ @@ -2924,7 +1957,7 @@ SingleDrvOutputs DerivationBuilderImpl::registerOutputs() paths referenced by each of them. If there are cycles in the outputs, this will fail. */ { - auto & localStore = getLocalStore(); + auto & localStore = getLocalStore(store); ValidPathInfos infos2; for (auto & [outputName, newInfo] : infos) { @@ -3154,5 +2187,88 @@ StorePath DerivationBuilderImpl::makeFallbackPath(const StorePath & path) Hash(HashAlgorithm::SHA256), path.name()); } +} + +// FIXME: do this properly +#include "linux-derivation-builder.cc" +#include "darwin-derivation-builder.cc" + +namespace nix { + +std::unique_ptr makeDerivationBuilder( + Store & store, + std::unique_ptr miscMethods, + DerivationBuilderParams params) +{ + bool useSandbox = false; + + /* Are we doing a sandboxed build? 
*/ + { + if (settings.sandboxMode == smEnabled) { + if (params.drvOptions.noChroot) + throw Error("derivation '%s' has '__noChroot' set, " + "but that's not allowed when 'sandbox' is 'true'", store.printStorePath(params.drvPath)); +#ifdef __APPLE__ + if (params.drvOptions.additionalSandboxProfile != "") + throw Error("derivation '%s' specifies a sandbox profile, " + "but this is only allowed when 'sandbox' is 'relaxed'", store.printStorePath(params.drvPath)); +#endif + useSandbox = true; + } + else if (settings.sandboxMode == smDisabled) + useSandbox = false; + else if (settings.sandboxMode == smRelaxed) + // FIXME: cache derivationType + useSandbox = params.drv.type().isSandboxed() && !params.drvOptions.noChroot; + } + + auto & localStore = getLocalStore(store); + if (localStore.storeDir != localStore.config->realStoreDir.get()) { + #ifdef __linux__ + useSandbox = true; + #else + throw Error("building using a diverted store is not supported on this platform"); + #endif + } + + #ifdef __linux__ + if (useSandbox && !mountAndPidNamespacesSupported()) { + if (!settings.sandboxFallback) + throw Error("this system does not support the kernel namespaces that are required for sandboxing; use '--no-sandbox' to disable sandboxing"); + debug("auto-disabling sandboxing because the prerequisite namespaces are not available"); + useSandbox = false; + } + + if (useSandbox) + return std::make_unique( + store, + std::move(miscMethods), + std::move(params)); + #endif + + if (!useSandbox && params.drvOptions.useUidRange(params.drv)) + throw Error("feature 'uid-range' is only supported in sandboxed builds"); + + #ifdef __APPLE__ + return std::make_unique( + store, + std::move(miscMethods), + std::move(params), + useSandbox); + #elif defined(__linux__) + return std::make_unique( + store, + std::move(miscMethods), + std::move(params)); + #else + if (useSandbox) + throw Error("sandboxing builds is not supported on this platform"); + + return std::make_unique( + store, + std::move(miscMethods), + std::move(params)); + #endif +} } diff --git a/src/libstore/unix/build/linux-derivation-builder.cc b/src/libstore/unix/build/linux-derivation-builder.cc new file mode 100644 index 000000000..b23c8003f --- /dev/null +++ b/src/libstore/unix/build/linux-derivation-builder.cc @@ -0,0 +1,883 @@ +#ifdef __linux__ + +# include "nix/store/personality.hh" +# include "nix/util/cgroup.hh" +# include "nix/util/linux-namespaces.hh" +# include "linux/fchmodat2-compat.hh" + +# include +# include +# include +# include +# include +# include +# include +# include + +# if HAVE_SECCOMP +# include +# endif + +# define pivot_root(new_root, put_old) (syscall(SYS_pivot_root, new_root, put_old)) + +namespace nix { + +static void setupSeccomp() +{ + if (!settings.filterSyscalls) + return; + +# if HAVE_SECCOMP + scmp_filter_ctx ctx; + + if (!(ctx = seccomp_init(SCMP_ACT_ALLOW))) + throw SysError("unable to initialize seccomp mode 2"); + + Finally cleanup([&]() { seccomp_release(ctx); }); + + constexpr std::string_view nativeSystem = NIX_LOCAL_SYSTEM; + + if (nativeSystem == "x86_64-linux" && seccomp_arch_add(ctx, SCMP_ARCH_X86) != 0) + throw SysError("unable to add 32-bit seccomp architecture"); + + if (nativeSystem == "x86_64-linux" && seccomp_arch_add(ctx, SCMP_ARCH_X32) != 0) + throw SysError("unable to add X32 seccomp architecture"); + + if (nativeSystem == "aarch64-linux" && seccomp_arch_add(ctx, SCMP_ARCH_ARM) != 0) + printError( + "unable to add ARM seccomp architecture; this may result in spurious build failures if running 32-bit ARM 
processes"); + + if (nativeSystem == "mips64-linux" && seccomp_arch_add(ctx, SCMP_ARCH_MIPS) != 0) + printError("unable to add mips seccomp architecture"); + + if (nativeSystem == "mips64-linux" && seccomp_arch_add(ctx, SCMP_ARCH_MIPS64N32) != 0) + printError("unable to add mips64-*abin32 seccomp architecture"); + + if (nativeSystem == "mips64el-linux" && seccomp_arch_add(ctx, SCMP_ARCH_MIPSEL) != 0) + printError("unable to add mipsel seccomp architecture"); + + if (nativeSystem == "mips64el-linux" && seccomp_arch_add(ctx, SCMP_ARCH_MIPSEL64N32) != 0) + printError("unable to add mips64el-*abin32 seccomp architecture"); + + /* Prevent builders from creating setuid/setgid binaries. */ + for (int perm : {S_ISUID, S_ISGID}) { + if (seccomp_rule_add( + ctx, + SCMP_ACT_ERRNO(EPERM), + SCMP_SYS(chmod), + 1, + SCMP_A1(SCMP_CMP_MASKED_EQ, (scmp_datum_t) perm, (scmp_datum_t) perm)) + != 0) + throw SysError("unable to add seccomp rule"); + + if (seccomp_rule_add( + ctx, + SCMP_ACT_ERRNO(EPERM), + SCMP_SYS(fchmod), + 1, + SCMP_A1(SCMP_CMP_MASKED_EQ, (scmp_datum_t) perm, (scmp_datum_t) perm)) + != 0) + throw SysError("unable to add seccomp rule"); + + if (seccomp_rule_add( + ctx, + SCMP_ACT_ERRNO(EPERM), + SCMP_SYS(fchmodat), + 1, + SCMP_A2(SCMP_CMP_MASKED_EQ, (scmp_datum_t) perm, (scmp_datum_t) perm)) + != 0) + throw SysError("unable to add seccomp rule"); + + if (seccomp_rule_add( + ctx, + SCMP_ACT_ERRNO(EPERM), + NIX_SYSCALL_FCHMODAT2, + 1, + SCMP_A2(SCMP_CMP_MASKED_EQ, (scmp_datum_t) perm, (scmp_datum_t) perm)) + != 0) + throw SysError("unable to add seccomp rule"); + } + + /* Prevent builders from using EAs or ACLs. Not all filesystems + support these, and they're not allowed in the Nix store because + they're not representable in the NAR serialisation. */ + if (seccomp_rule_add(ctx, SCMP_ACT_ERRNO(ENOTSUP), SCMP_SYS(getxattr), 0) != 0 + || seccomp_rule_add(ctx, SCMP_ACT_ERRNO(ENOTSUP), SCMP_SYS(lgetxattr), 0) != 0 + || seccomp_rule_add(ctx, SCMP_ACT_ERRNO(ENOTSUP), SCMP_SYS(fgetxattr), 0) != 0 + || seccomp_rule_add(ctx, SCMP_ACT_ERRNO(ENOTSUP), SCMP_SYS(setxattr), 0) != 0 + || seccomp_rule_add(ctx, SCMP_ACT_ERRNO(ENOTSUP), SCMP_SYS(lsetxattr), 0) != 0 + || seccomp_rule_add(ctx, SCMP_ACT_ERRNO(ENOTSUP), SCMP_SYS(fsetxattr), 0) != 0) + throw SysError("unable to add seccomp rule"); + + if (seccomp_attr_set(ctx, SCMP_FLTATR_CTL_NNP, settings.allowNewPrivileges ? 
0 : 1) != 0) + throw SysError("unable to set 'no new privileges' seccomp attribute"); + + if (seccomp_load(ctx) != 0) + throw SysError("unable to load seccomp BPF program"); +# else + throw Error( + "seccomp is not supported on this platform; " + "you can bypass this error by setting the option 'filter-syscalls' to false, but note that untrusted builds can then create setuid binaries!"); +# endif +} + +static void doBind(const Path & source, const Path & target, bool optional = false) +{ + debug("bind mounting '%1%' to '%2%'", source, target); + + auto bindMount = [&]() { + if (mount(source.c_str(), target.c_str(), "", MS_BIND | MS_REC, 0) == -1) + throw SysError("bind mount from '%1%' to '%2%' failed", source, target); + }; + + auto maybeSt = maybeLstat(source); + if (!maybeSt) { + if (optional) + return; + else + throw SysError("getting attributes of path '%1%'", source); + } + auto st = *maybeSt; + + if (S_ISDIR(st.st_mode)) { + createDirs(target); + bindMount(); + } else if (S_ISLNK(st.st_mode)) { + // Symlinks can (apparently) not be bind-mounted, so just copy it + createDirs(dirOf(target)); + copyFile(std::filesystem::path(source), std::filesystem::path(target), false); + } else { + createDirs(dirOf(target)); + writeFile(target, ""); + bindMount(); + } +} + +struct LinuxDerivationBuilder : DerivationBuilderImpl +{ + using DerivationBuilderImpl::DerivationBuilderImpl; + + void enterChroot() override + { + setupSeccomp(); + + linux::setPersonality(drv.platform); + } +}; + +struct ChrootLinuxDerivationBuilder : LinuxDerivationBuilder +{ + /** + * Pipe for synchronising updates to the builder namespaces. + */ + Pipe userNamespaceSync; + + /** + * The mount namespace and user namespace of the builder, used to add additional + * paths to the sandbox as a result of recursive Nix calls. + */ + AutoCloseFD sandboxMountNamespace; + AutoCloseFD sandboxUserNamespace; + + /** + * On Linux, whether we're doing the build in its own user + * namespace. + */ + bool usingUserNamespace = true; + + /** + * The root of the chroot environment. + */ + Path chrootRootDir; + + /** + * RAII object to delete the chroot directory. + */ + std::shared_ptr autoDelChroot; + + PathsInChroot pathsInChroot; + + /** + * The cgroup of the builder, if any. + */ + std::optional cgroup; + + using LinuxDerivationBuilder::LinuxDerivationBuilder; + + void deleteTmpDir(bool force) override + { + autoDelChroot.reset(); /* this runs the destructor */ + + DerivationBuilderImpl::deleteTmpDir(force); + } + + uid_t sandboxUid() + { + return usingUserNamespace ? (!buildUser || buildUser->getUIDCount() == 1 ? 1000 : 0) : buildUser->getUID(); + } + + gid_t sandboxGid() + { + return usingUserNamespace ? (!buildUser || buildUser->getUIDCount() == 1 ? 100 : 0) : buildUser->getGID(); + } + + bool needsHashRewrite() override + { + return false; + } + + std::unique_ptr getBuildUser() override + { + return acquireUserLock(drvOptions.useUidRange(drv) ? 65536 : 1, true); + } + + void setBuildTmpDir() override + { + /* If sandboxing is enabled, put the actual TMPDIR underneath + an inaccessible root-owned directory, to prevent outside + access. + + On macOS, we don't use an actual chroot, so this isn't + possible. Any mitigation along these lines would have to be + done directly in the sandbox profile. */ + tmpDir = topTmpDir + "/build"; + createDir(tmpDir, 0700); + } + + Path tmpDirInSandbox() override + { + /* In a sandbox, for determinism, always use the same temporary + directory. 
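`LinuxDerivationBuilder::enterChroot()` above pairs the seccomp filter with `linux::setPersonality(drv.platform)`. A reduced sketch of the underlying mechanism, under the assumption that the helper selects `PER_LINUX32` when a 32-bit platform is built on a 64-bit kernel:

```c++
// Hedged illustration of personality(2); the PER_LINUX32 choice is an
// assumption about what setPersonality() does for 32-bit builds.
#include <cstdio>
#include <sys/personality.h>
#include <sys/utsname.h>

int main()
{
    if (personality(PER_LINUX32) == -1)
        std::perror("personality");

    struct utsname u;
    uname(&u);
    // Under PER_LINUX32 on x86_64, uname-style queries report a 32-bit
    // machine (e.g. "i686"), so 32-bit builds detect the right architecture.
    std::printf("machine: %s\n", u.machine);
    return 0;
}
```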
*/ + return settings.sandboxBuildDir; + } + + void prepareUser() override + { + if ((buildUser && buildUser->getUIDCount() != 1) || settings.useCgroups) { + experimentalFeatureSettings.require(Xp::Cgroups); + + /* If we're running from the daemon, then this will return the + root cgroup of the service. Otherwise, it will return the + current cgroup. */ + auto rootCgroup = getRootCgroup(); + auto cgroupFS = getCgroupFS(); + if (!cgroupFS) + throw Error("cannot determine the cgroups file system"); + auto rootCgroupPath = canonPath(*cgroupFS + "/" + rootCgroup); + if (!pathExists(rootCgroupPath)) + throw Error("expected cgroup directory '%s'", rootCgroupPath); + + static std::atomic counter{0}; + + cgroup = buildUser ? fmt("%s/nix-build-uid-%d", rootCgroupPath, buildUser->getUID()) + : fmt("%s/nix-build-pid-%d-%d", rootCgroupPath, getpid(), counter++); + + debug("using cgroup '%s'", *cgroup); + + /* When using a build user, record the cgroup we used for that + user so that if we got interrupted previously, we can kill + any left-over cgroup first. */ + if (buildUser) { + auto cgroupsDir = settings.nixStateDir + "/cgroups"; + createDirs(cgroupsDir); + + auto cgroupFile = fmt("%s/%d", cgroupsDir, buildUser->getUID()); + + if (pathExists(cgroupFile)) { + auto prevCgroup = readFile(cgroupFile); + destroyCgroup(prevCgroup); + } + + writeFile(cgroupFile, *cgroup); + } + } + + // Kill any processes left in the cgroup or build user. + DerivationBuilderImpl::prepareUser(); + } + + void prepareSandbox() override + { + /* Create a temporary directory in which we set up the chroot + environment using bind-mounts. We put it in the Nix store + so that the build outputs can be moved efficiently from the + chroot to their final location. */ + auto chrootParentDir = store.Store::toRealPath(drvPath) + ".chroot"; + deletePath(chrootParentDir); + + /* Clean up the chroot directory automatically. */ + autoDelChroot = std::make_shared(chrootParentDir); + + printMsg(lvlChatty, "setting up chroot environment in '%1%'", chrootParentDir); + + if (mkdir(chrootParentDir.c_str(), 0700) == -1) + throw SysError("cannot create '%s'", chrootRootDir); + + chrootRootDir = chrootParentDir + "/root"; + + if (mkdir(chrootRootDir.c_str(), buildUser && buildUser->getUIDCount() != 1 ? 0755 : 0750) == -1) + throw SysError("cannot create '%1%'", chrootRootDir); + + if (buildUser + && chown( + chrootRootDir.c_str(), buildUser->getUIDCount() != 1 ? buildUser->getUID() : 0, buildUser->getGID()) + == -1) + throw SysError("cannot change ownership of '%1%'", chrootRootDir); + + /* Create a writable /tmp in the chroot. Many builders need + this. (Of course they should really respect $TMPDIR + instead.) */ + Path chrootTmpDir = chrootRootDir + "/tmp"; + createDirs(chrootTmpDir); + chmod_(chrootTmpDir, 01777); + + /* Create a /etc/passwd with entries for the build user and the + nobody account. The latter is kind of a hack to support + Samba-in-QEMU. */ + createDirs(chrootRootDir + "/etc"); + if (drvOptions.useUidRange(drv)) + chownToBuilder(chrootRootDir + "/etc"); + + if (drvOptions.useUidRange(drv) && (!buildUser || buildUser->getUIDCount() < 65536)) + throw Error("feature 'uid-range' requires the setting '%s' to be enabled", settings.autoAllocateUids.name); + + /* Declare the build user's group so that programs get a consistent + view of the system (e.g., "id -gn"). */ + writeFile( + chrootRootDir + "/etc/group", + fmt("root:x:0:\n" + "nixbld:!:%1%:\n" + "nogroup:x:65534:\n", + sandboxGid())); + + /* Create /etc/hosts with localhost entry. 
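`prepareUser()` above places each build in its own cgroup (`nix-build-uid-<uid>` or `nix-build-pid-<pid>-<n>`), whose statistics later feed `buildResult.cpuUser`/`cpuSystem` via `destroyCgroup()`. A sketch of how such CPU figures can be read, assuming cgroup v2 and a hypothetical cgroup path following that naming pattern:

```c++
// Hedged sketch: reading cpu.stat of a build cgroup (cgroup v2 assumed;
// the path below is hypothetical).
#include <fstream>
#include <iostream>
#include <string>

int main()
{
    std::ifstream in("/sys/fs/cgroup/nix-build-uid-30001/cpu.stat");
    std::string key;
    unsigned long long usec;
    while (in >> key >> usec)
        if (key == "user_usec" || key == "system_usec")
            std::cout << key << ": " << usec << " microseconds\n";
    return 0;
}
```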
*/ + if (derivationType.isSandboxed()) + writeFile(chrootRootDir + "/etc/hosts", "127.0.0.1 localhost\n::1 localhost\n"); + + /* Make the closure of the inputs available in the chroot, + rather than the whole Nix store. This prevents any access + to undeclared dependencies. Directories are bind-mounted, + while other inputs are hard-linked (since only directories + can be bind-mounted). !!! As an extra security + precaution, make the fake Nix store only writable by the + build user. */ + Path chrootStoreDir = chrootRootDir + store.storeDir; + createDirs(chrootStoreDir); + chmod_(chrootStoreDir, 01775); + + if (buildUser && chown(chrootStoreDir.c_str(), 0, buildUser->getGID()) == -1) + throw SysError("cannot change ownership of '%1%'", chrootStoreDir); + + pathsInChroot = getPathsInSandbox(); + + for (auto & i : inputPaths) { + auto p = store.printStorePath(i); + pathsInChroot.insert_or_assign(p, store.toRealPath(p)); + } + + /* If we're repairing, checking or rebuilding part of a + multiple-outputs derivation, it's possible that we're + rebuilding a path that is in settings.sandbox-paths + (typically the dependencies of /bin/sh). Throw them + out. */ + for (auto & i : drv.outputsAndOptPaths(store)) { + /* If the name isn't known a priori (i.e. floating + content-addressing derivation), the temporary location we use + should be fresh. Freshness means it is impossible that the path + is already in the sandbox, so we don't need to worry about + removing it. */ + if (i.second.second) + pathsInChroot.erase(store.printStorePath(*i.second.second)); + } + + if (cgroup) { + if (mkdir(cgroup->c_str(), 0755) != 0) + throw SysError("creating cgroup '%s'", *cgroup); + chownToBuilder(*cgroup); + chownToBuilder(*cgroup + "/cgroup.procs"); + chownToBuilder(*cgroup + "/cgroup.threads"); + // chownToBuilder(*cgroup + "/cgroup.subtree_control"); + } + } + + Strings getPreBuildHookArgs() override + { + assert(!chrootRootDir.empty()); + return Strings({store.printStorePath(drvPath), chrootRootDir}); + } + + Path realPathInSandbox(const Path & p) override + { + // FIXME: why the needsHashRewrite() conditional? + return !needsHashRewrite() ? chrootRootDir + p : store.toRealPath(p); + } + + void startChild() override + { + /* Set up private namespaces for the build: + + - The PID namespace causes the build to start as PID 1. + Processes outside of the chroot are not visible to those + on the inside, but processes inside the chroot are + visible from the outside (though with different PIDs). + + - The private mount namespace ensures that all the bind + mounts we do will only show up in this process and its + children, and will disappear automatically when we're + done. + + - The private network namespace ensures that the builder + cannot talk to the outside world (or vice versa). It + only has a private loopback interface. (Fixed-output + derivations are not run in a private network namespace + to allow functions like fetchurl to work.) + + - The IPC namespace prevents the builder from communicating + with outside processes using SysV IPC mechanisms (shared + memory, message queues, semaphores). It also ensures + that all IPC objects are destroyed when the builder + exits. + + - The UTS namespace ensures that builders see a hostname of + localhost rather than the actual hostname. + + We use a helper process to do the clone() to work around + clone() being broken in multi-threaded programs due to + at-fork handlers not being run. Note that we use + CLONE_PARENT to ensure that the real builder is parented to + us. 
+ */ + + userNamespaceSync.create(); + + usingUserNamespace = userNamespacesSupported(); + + Pipe sendPid; + sendPid.create(); + + Pid helper = startProcess([&]() { + sendPid.readSide.close(); + + /* We need to open the slave early, before + CLONE_NEWUSER. Otherwise we get EPERM when running as + root. */ + openSlave(); + + try { + /* Drop additional groups here because we can't do it + after we've created the new user namespace. */ + if (setgroups(0, 0) == -1) { + if (errno != EPERM) + throw SysError("setgroups failed"); + if (settings.requireDropSupplementaryGroups) + throw Error( + "setgroups failed. Set the require-drop-supplementary-groups option to false to skip this step."); + } + + ProcessOptions options; + options.cloneFlags = CLONE_NEWPID | CLONE_NEWNS | CLONE_NEWIPC | CLONE_NEWUTS | CLONE_PARENT | SIGCHLD; + if (derivationType.isSandboxed()) + options.cloneFlags |= CLONE_NEWNET; + if (usingUserNamespace) + options.cloneFlags |= CLONE_NEWUSER; + + pid_t child = startProcess([&]() { runChild(); }, options); + + writeFull(sendPid.writeSide.get(), fmt("%d\n", child)); + _exit(0); + } catch (...) { + handleChildException(true); + _exit(1); + } + }); + + sendPid.writeSide.close(); + + if (helper.wait() != 0) { + processSandboxSetupMessages(); + // Only reached if the child process didn't send an exception. + throw Error("unable to start build process"); + } + + userNamespaceSync.readSide = -1; + + /* Close the write side to prevent runChild() from hanging + reading from this. */ + Finally cleanup([&]() { userNamespaceSync.writeSide = -1; }); + + auto ss = tokenizeString>(readLine(sendPid.readSide.get())); + assert(ss.size() == 1); + pid = string2Int(ss[0]).value(); + + if (usingUserNamespace) { + /* Set the UID/GID mapping of the builder's user namespace + such that the sandbox user maps to the build user, or to + the calling user (if build users are disabled). */ + uid_t hostUid = buildUser ? buildUser->getUID() : getuid(); + uid_t hostGid = buildUser ? buildUser->getGID() : getgid(); + uid_t nrIds = buildUser ? buildUser->getUIDCount() : 1; + + writeFile("/proc/" + std::to_string(pid) + "/uid_map", fmt("%d %d %d", sandboxUid(), hostUid, nrIds)); + + if (!buildUser || buildUser->getUIDCount() == 1) + writeFile("/proc/" + std::to_string(pid) + "/setgroups", "deny"); + + writeFile("/proc/" + std::to_string(pid) + "/gid_map", fmt("%d %d %d", sandboxGid(), hostGid, nrIds)); + } else { + debug("note: not using a user namespace"); + if (!buildUser) + throw Error( + "cannot perform a sandboxed build because user namespaces are not enabled; check /proc/sys/user/max_user_namespaces"); + } + + /* Now that we now the sandbox uid, we can write + /etc/passwd. */ + writeFile( + chrootRootDir + "/etc/passwd", + fmt("root:x:0:0:Nix build user:%3%:/noshell\n" + "nixbld:x:%1%:%2%:Nix build user:%3%:/noshell\n" + "nobody:x:65534:65534:Nobody:/:/noshell\n", + sandboxUid(), + sandboxGid(), + settings.sandboxBuildDir)); + + /* Save the mount- and user namespace of the child. We have to do this + *before* the child does a chroot. */ + sandboxMountNamespace = open(fmt("/proc/%d/ns/mnt", (pid_t) pid).c_str(), O_RDONLY); + if (sandboxMountNamespace.get() == -1) + throw SysError("getting sandbox mount namespace"); + + if (usingUserNamespace) { + sandboxUserNamespace = open(fmt("/proc/%d/ns/user", (pid_t) pid).c_str(), O_RDONLY); + if (sandboxUserNamespace.get() == -1) + throw SysError("getting sandbox user namespace"); + } + + /* Move the child into its own cgroup. 
*/ + if (cgroup) + writeFile(*cgroup + "/cgroup.procs", fmt("%d", (pid_t) pid)); + + /* Signal the builder that we've updated its user namespace. */ + writeFull(userNamespaceSync.writeSide.get(), "1"); + } + + void enterChroot() override + { + userNamespaceSync.writeSide = -1; + + if (drainFD(userNamespaceSync.readSide.get()) != "1") + throw Error("user namespace initialisation failed"); + + userNamespaceSync.readSide = -1; + + if (derivationType.isSandboxed()) { + + /* Initialise the loopback interface. */ + AutoCloseFD fd(socket(PF_INET, SOCK_DGRAM, IPPROTO_IP)); + if (!fd) + throw SysError("cannot open IP socket"); + + struct ifreq ifr; + strcpy(ifr.ifr_name, "lo"); + ifr.ifr_flags = IFF_UP | IFF_LOOPBACK | IFF_RUNNING; + if (ioctl(fd.get(), SIOCSIFFLAGS, &ifr) == -1) + throw SysError("cannot set loopback interface flags"); + } + + /* Set the hostname etc. to fixed values. */ + char hostname[] = "localhost"; + if (sethostname(hostname, sizeof(hostname)) == -1) + throw SysError("cannot set host name"); + char domainname[] = "(none)"; // kernel default + if (setdomainname(domainname, sizeof(domainname)) == -1) + throw SysError("cannot set domain name"); + + /* Make all filesystems private. This is necessary + because subtrees may have been mounted as "shared" + (MS_SHARED). (Systemd does this, for instance.) Even + though we have a private mount namespace, mounting + filesystems on top of a shared subtree still propagates + outside of the namespace. Making a subtree private is + local to the namespace, though, so setting MS_PRIVATE + does not affect the outside world. */ + if (mount(0, "/", 0, MS_PRIVATE | MS_REC, 0) == -1) + throw SysError("unable to make '/' private"); + + /* Bind-mount chroot directory to itself, to treat it as a + different filesystem from /, as needed for pivot_root. */ + if (mount(chrootRootDir.c_str(), chrootRootDir.c_str(), 0, MS_BIND, 0) == -1) + throw SysError("unable to bind mount '%1%'", chrootRootDir); + + /* Bind-mount the sandbox's Nix store onto itself so that + we can mark it as a "shared" subtree, allowing bind + mounts made in *this* mount namespace to be propagated + into the child namespace created by the + unshare(CLONE_NEWNS) call below. + + Marking chrootRootDir as MS_SHARED causes pivot_root() + to fail with EINVAL. Don't know why. */ + Path chrootStoreDir = chrootRootDir + store.storeDir; + + if (mount(chrootStoreDir.c_str(), chrootStoreDir.c_str(), 0, MS_BIND, 0) == -1) + throw SysError("unable to bind mount the Nix store", chrootStoreDir); + + if (mount(0, chrootStoreDir.c_str(), 0, MS_SHARED, 0) == -1) + throw SysError("unable to make '%s' shared", chrootStoreDir); + + /* Set up a nearly empty /dev, unless the user asked to + bind-mount the host /dev. 
*/ + Strings ss; + if (pathsInChroot.find("/dev") == pathsInChroot.end()) { + createDirs(chrootRootDir + "/dev/shm"); + createDirs(chrootRootDir + "/dev/pts"); + ss.push_back("/dev/full"); + if (store.config.systemFeatures.get().count("kvm") && pathExists("/dev/kvm")) + ss.push_back("/dev/kvm"); + ss.push_back("/dev/null"); + ss.push_back("/dev/random"); + ss.push_back("/dev/tty"); + ss.push_back("/dev/urandom"); + ss.push_back("/dev/zero"); + createSymlink("/proc/self/fd", chrootRootDir + "/dev/fd"); + createSymlink("/proc/self/fd/0", chrootRootDir + "/dev/stdin"); + createSymlink("/proc/self/fd/1", chrootRootDir + "/dev/stdout"); + createSymlink("/proc/self/fd/2", chrootRootDir + "/dev/stderr"); + } + + /* Fixed-output derivations typically need to access the + network, so give them access to /etc/resolv.conf and so + on. */ + if (!derivationType.isSandboxed()) { + // Only use nss functions to resolve hosts and + // services. Don’t use it for anything else that may + // be configured for this system. This limits the + // potential impurities introduced in fixed-outputs. + writeFile(chrootRootDir + "/etc/nsswitch.conf", "hosts: files dns\nservices: files\n"); + + /* N.B. it is realistic that these paths might not exist. It + happens when testing Nix building fixed-output derivations + within a pure derivation. */ + for (auto & path : {"/etc/resolv.conf", "/etc/services", "/etc/hosts"}) + if (pathExists(path)) + ss.push_back(path); + + if (settings.caFile != "") { + Path caFile = settings.caFile; + if (pathExists(caFile)) + pathsInChroot.try_emplace("/etc/ssl/certs/ca-certificates.crt", canonPath(caFile, true), true); + } + } + + for (auto & i : ss) { + // For backwards-compatibility, resolve all the symlinks in the + // chroot paths. + auto canonicalPath = canonPath(i, true); + pathsInChroot.emplace(i, canonicalPath); + } + + /* Bind-mount all the directories from the "host" + filesystem that we want in the chroot + environment. */ + for (auto & i : pathsInChroot) { + if (i.second.source == "/proc") + continue; // backwards compatibility + +# if HAVE_EMBEDDED_SANDBOX_SHELL + if (i.second.source == "__embedded_sandbox_shell__") { + static unsigned char sh[] = { +# include "embedded-sandbox-shell.gen.hh" + }; + auto dst = chrootRootDir + i.first; + createDirs(dirOf(dst)); + writeFile(dst, std::string_view((const char *) sh, sizeof(sh))); + chmod_(dst, 0555); + } else +# endif + { + doBind(i.second.source, chrootRootDir + i.first, i.second.optional); + } + } + + /* Bind a new instance of procfs on /proc. */ + createDirs(chrootRootDir + "/proc"); + if (mount("none", (chrootRootDir + "/proc").c_str(), "proc", 0, 0) == -1) + throw SysError("mounting /proc"); + + /* Mount sysfs on /sys. */ + if (buildUser && buildUser->getUIDCount() != 1) { + createDirs(chrootRootDir + "/sys"); + if (mount("none", (chrootRootDir + "/sys").c_str(), "sysfs", 0, 0) == -1) + throw SysError("mounting /sys"); + } + + /* Mount a new tmpfs on /dev/shm to ensure that whatever + the builder puts in /dev/shm is cleaned up automatically. */ + if (pathExists("/dev/shm") + && mount( + "none", + (chrootRootDir + "/dev/shm").c_str(), + "tmpfs", + 0, + fmt("size=%s", settings.sandboxShmSize).c_str()) + == -1) + throw SysError("mounting /dev/shm"); + + /* Mount a new devpts on /dev/pts. Note that this + requires the kernel to be compiled with + CONFIG_DEVPTS_MULTIPLE_INSTANCES=y (which is the case + if /dev/ptx/ptmx exists). 
*/ + if (pathExists("/dev/pts/ptmx") && !pathExists(chrootRootDir + "/dev/ptmx") + && !pathsInChroot.count("/dev/pts")) { + if (mount("none", (chrootRootDir + "/dev/pts").c_str(), "devpts", 0, "newinstance,mode=0620") == 0) { + createSymlink("/dev/pts/ptmx", chrootRootDir + "/dev/ptmx"); + + /* Make sure /dev/pts/ptmx is world-writable. With some + Linux versions, it is created with permissions 0. */ + chmod_(chrootRootDir + "/dev/pts/ptmx", 0666); + } else { + if (errno != EINVAL) + throw SysError("mounting /dev/pts"); + doBind("/dev/pts", chrootRootDir + "/dev/pts"); + doBind("/dev/ptmx", chrootRootDir + "/dev/ptmx"); + } + } + + /* Make /etc unwritable */ + if (!drvOptions.useUidRange(drv)) + chmod_(chrootRootDir + "/etc", 0555); + + /* Unshare this mount namespace. This is necessary because + pivot_root() below changes the root of the mount + namespace. This means that the call to setns() in + addDependency() would hide the host's filesystem, + making it impossible to bind-mount paths from the host + Nix store into the sandbox. Therefore, we save the + pre-pivot_root namespace in + sandboxMountNamespace. Since we made /nix/store a + shared subtree above, this allows addDependency() to + make paths appear in the sandbox. */ + if (unshare(CLONE_NEWNS) == -1) + throw SysError("unsharing mount namespace"); + + /* Unshare the cgroup namespace. This means + /proc/self/cgroup will show the child's cgroup as '/' + rather than whatever it is in the parent. */ + if (cgroup && unshare(CLONE_NEWCGROUP) == -1) + throw SysError("unsharing cgroup namespace"); + + /* Do the chroot(). */ + if (chdir(chrootRootDir.c_str()) == -1) + throw SysError("cannot change directory to '%1%'", chrootRootDir); + + if (mkdir("real-root", 0500) == -1) + throw SysError("cannot create real-root directory"); + + if (pivot_root(".", "real-root") == -1) + throw SysError("cannot pivot old root directory onto '%1%'", (chrootRootDir + "/real-root")); + + if (chroot(".") == -1) + throw SysError("cannot change root directory to '%1%'", chrootRootDir); + + if (umount2("real-root", MNT_DETACH) == -1) + throw SysError("cannot unmount real root filesystem"); + + if (rmdir("real-root") == -1) + throw SysError("cannot remove real-root directory"); + + LinuxDerivationBuilder::enterChroot(); + } + + void setUser() override + { + /* Switch to the sandbox uid/gid in the user namespace, + which corresponds to the build user or calling user in + the parent namespace. */ + if (setgid(sandboxGid()) == -1) + throw SysError("setgid failed"); + if (setuid(sandboxUid()) == -1) + throw SysError("setuid failed"); + } + + std::variant, SingleDrvOutputs> unprepareBuild() override + { + sandboxMountNamespace = -1; + sandboxUserNamespace = -1; + + return DerivationBuilderImpl::unprepareBuild(); + } + + void killSandbox(bool getStats) override + { + if (cgroup) { + auto stats = destroyCgroup(*cgroup); + if (getStats) { + buildResult.cpuUser = stats.cpuUser; + buildResult.cpuSystem = stats.cpuSystem; + } + return; + } + + DerivationBuilderImpl::killSandbox(getStats); + } + + void cleanupBuild() override + { + DerivationBuilderImpl::cleanupBuild(); + + /* Move paths out of the chroot for easier debugging of + build failures. 
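Builders that allocate pseudo-terminals rely on the private `devpts` instance and the `/dev/ptmx` symlink set up in `enterChroot()` above. An illustrative sketch of the builder-side behaviour this enables:

```c++
// Hedged sketch: opening a pty inside the sandbox, which only works because a
// fresh devpts is mounted on /dev/pts and /dev/ptmx exists in the chroot.
#include <cstdio>
#include <cstdlib>
#include <fcntl.h>
#include <unistd.h>

int main()
{
    int master = posix_openpt(O_RDWR | O_NOCTTY);
    if (master == -1) {
        std::perror("posix_openpt");
        return 1;
    }
    grantpt(master);   // error handling elided in this sketch
    unlockpt(master);
    std::printf("slave pty: %s\n", ptsname(master)); // e.g. /dev/pts/0
    close(master);
    return 0;
}
```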
*/ + if (buildMode == bmNormal) + for (auto & [_, status] : initialOutputs) { + if (!status.known) + continue; + if (buildMode != bmCheck && status.known->isValid()) + continue; + auto p = store.toRealPath(status.known->path); + if (pathExists(chrootRootDir + p)) + std::filesystem::rename((chrootRootDir + p), p); + } + } + + void addDependency(const StorePath & path) override + { + if (isAllowed(path)) + return; + + addedPaths.insert(path); + + debug("materialising '%s' in the sandbox", store.printStorePath(path)); + + Path source = store.Store::toRealPath(path); + Path target = chrootRootDir + store.printStorePath(path); + + if (pathExists(target)) { + // There is a similar debug message in doBind, so only run it in this block to not have double messages. + debug("bind-mounting %s -> %s", target, source); + throw Error("store path '%s' already exists in the sandbox", store.printStorePath(path)); + } + + /* Bind-mount the path into the sandbox. This requires + entering its mount namespace, which is not possible + in multithreaded programs. So we do this in a + child process.*/ + Pid child(startProcess([&]() { + if (usingUserNamespace && (setns(sandboxUserNamespace.get(), 0) == -1)) + throw SysError("entering sandbox user namespace"); + + if (setns(sandboxMountNamespace.get(), 0) == -1) + throw SysError("entering sandbox mount namespace"); + + doBind(source, target); + + _exit(0); + })); + + int status = child.wait(); + if (status != 0) + throw Error("could not add path '%s' to sandbox", store.printStorePath(path)); + } +}; + +} + +#endif diff --git a/src/libstore/unix/include/nix/store/build/derivation-builder.hh b/src/libstore/unix/include/nix/store/build/derivation-builder.hh index 81a574fd0..2dddfdff8 100644 --- a/src/libstore/unix/include/nix/store/build/derivation-builder.hh +++ b/src/libstore/unix/include/nix/store/build/derivation-builder.hh @@ -104,14 +104,6 @@ struct DerivationBuilderCallbacks */ virtual void closeLogFile() = 0; - /** - * Aborts if any output is not valid or corrupt, and otherwise - * returns a 'SingleDrvOutputs' structure containing all outputs. - * - * @todo Probably should just be in `DerivationGoal`. - */ - virtual SingleDrvOutputs assertPathValidity() = 0; - virtual void appendLogTailErrorMsg(std::string & msg) = 0; /** @@ -145,11 +137,6 @@ struct DerivationBuilderCallbacks */ struct DerivationBuilder : RestrictionContext { - /** - * User selected for running the builder. - */ - std::unique_ptr buildUser; - /** * The process ID of the builder. 
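The `addDependency()` override above enters the saved sandbox namespaces with `setns()` from a forked child (joining a mount namespace is not possible in a multithreaded process) and then bind-mounts the new store path. A standalone sketch of that pattern, reduced to joining another process's mount namespace by PID; it assumes Linux and sufficient privileges in the target user namespace:

```c++
// Hedged sketch of the setns() pattern used by addDependency(), not tied to
// Nix internals: join the mount namespace of the process given as argv[1].
#include <cstdio>
#include <fcntl.h>
#include <sched.h>
#include <unistd.h>

int main(int argc, char ** argv)
{
    if (argc != 2) {
        std::fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }
    char nsPath[64];
    std::snprintf(nsPath, sizeof nsPath, "/proc/%s/ns/mnt", argv[1]);

    int fd = open(nsPath, O_RDONLY);
    if (fd == -1) { std::perror("open"); return 1; }

    // Must run in a single-threaded process, hence the child process in
    // addDependency().
    if (setns(fd, CLONE_NEWNS) == -1) { std::perror("setns"); return 1; }

    // From here on, mount(2) calls affect the target's mount namespace.
    std::puts("joined the target mount namespace");
    return 0;
}
```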
*/ diff --git a/src/libstore/unix/user-lock.cc b/src/libstore/unix/user-lock.cc index 2bee277f9..6a07cb7cc 100644 --- a/src/libstore/unix/user-lock.cc +++ b/src/libstore/unix/user-lock.cc @@ -7,6 +7,7 @@ #include "nix/store/globals.hh" #include "nix/store/pathlocks.hh" #include "nix/util/users.hh" +#include "nix/util/logging.hh" namespace nix { @@ -196,7 +197,7 @@ bool useBuildUsers() #ifdef __linux__ static bool b = (settings.buildUsersGroup != "" || settings.autoAllocateUids) && isRootUser(); return b; - #elif defined(__APPLE__) + #elif defined(__APPLE__) && defined(__FreeBSD__) static bool b = settings.buildUsersGroup != "" && isRootUser(); return b; #else diff --git a/src/libstore/windows/pathlocks.cc b/src/libstore/windows/pathlocks.cc index 0ba75853b..92a7cbcf9 100644 --- a/src/libstore/windows/pathlocks.cc +++ b/src/libstore/windows/pathlocks.cc @@ -127,7 +127,7 @@ bool PathLocks::lockPaths(const PathSet & paths, const std::string & waitMsg, bo } } - debug("lock aquired on '%1%'", lockPath); + debug("lock acquired on '%1%'", lockPath); struct _stat st; if (_fstat(fromDescriptorReadOnly(fd.get()), &st) == -1) diff --git a/src/libutil-tests/logging.cc b/src/libutil-tests/logging.cc index 494e9ce4c..5c9fcfe8f 100644 --- a/src/libutil-tests/logging.cc +++ b/src/libutil-tests/logging.cc @@ -19,7 +19,7 @@ namespace nix { const char *one_liner = "this is the other problem line of code"; - TEST(logEI, catpuresBasicProperties) { + TEST(logEI, capturesBasicProperties) { MakeError(TestError, Error); ErrorInfo::programName = std::optional("error-unit-test"); diff --git a/src/libutil-tests/meson.build b/src/libutil-tests/meson.build index f2552550d..b3776e094 100644 --- a/src/libutil-tests/meson.build +++ b/src/libutil-tests/meson.build @@ -65,6 +65,7 @@ sources = files( 'position.cc', 'processes.cc', 'references.cc', + 'sort.cc', 'spawn.cc', 'strings.cc', 'suggestions.cc', diff --git a/src/libutil-tests/sort.cc b/src/libutil-tests/sort.cc new file mode 100644 index 000000000..8eee961c8 --- /dev/null +++ b/src/libutil-tests/sort.cc @@ -0,0 +1,274 @@ +#include +#include +#include "nix/util/sort.hh" + +#include +#include +#include +#include + +namespace nix { + +struct MonotonicSubranges : public ::testing::Test +{ + std::vector empty_; + std::vector basic_ = {1, 0, -1, -100, 10, 10, 20, 40, 5, 5, 20, 10, 10, 1, -5}; +}; + +TEST_F(MonotonicSubranges, empty) +{ + ASSERT_EQ(weaklyIncreasingPrefix(empty_.begin(), empty_.end()), empty_.begin()); + ASSERT_EQ(weaklyIncreasingSuffix(empty_.begin(), empty_.end()), empty_.begin()); + ASSERT_EQ(strictlyDecreasingPrefix(empty_.begin(), empty_.end()), empty_.begin()); + ASSERT_EQ(strictlyDecreasingSuffix(empty_.begin(), empty_.end()), empty_.begin()); +} + +TEST_F(MonotonicSubranges, basic) +{ + ASSERT_EQ(strictlyDecreasingPrefix(basic_.begin(), basic_.end()), basic_.begin() + 4); + ASSERT_EQ(strictlyDecreasingSuffix(basic_.begin(), basic_.end()), basic_.begin() + 12); + std::reverse(basic_.begin(), basic_.end()); + ASSERT_EQ(weaklyIncreasingPrefix(basic_.begin(), basic_.end()), basic_.begin() + 5); + ASSERT_EQ(weaklyIncreasingSuffix(basic_.begin(), basic_.end()), basic_.begin() + 11); +} + +template +class SortTestPermutations : public ::testing::Test +{ + std::vector initialData = {std::numeric_limits::max(), std::numeric_limits::min(), 0, 0, 42, 126, 36}; + std::vector vectorData; + std::list listData; + +public: + std::vector scratchVector; + std::list scratchList; + std::vector empty; + + void SetUp() override + { + vectorData = initialData; + 
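One hunk above deserves a second look: `defined(__APPLE__) && defined(__FreeBSD__)` can never be true, since the two macros are mutually exclusive, so the `useBuildUsers()` branch it guards becomes unreachable. Presumably the intent is to extend the Darwin behaviour to FreeBSD; a hedged sketch of that reading (an assumption, not taken from this patch, with the remaining platforms filled in only for illustration):

```c++
// Sketch only: the guard as it would read if FreeBSD is an alternative to
// Darwin rather than a conjunction with it. As committed above, the `&&`
// makes the branch dead code.
bool useBuildUsers()
{
#ifdef __linux__
    static bool b = (settings.buildUsersGroup != "" || settings.autoAllocateUids) && isRootUser();
    return b;
#elif defined(__APPLE__) || defined(__FreeBSD__)
    static bool b = settings.buildUsersGroup != "" && isRootUser();
    return b;
#else
    return false; // assumed fallback for other platforms; elided in the hunk above
#endif
}
```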
std::sort(vectorData.begin(), vectorData.end()); + listData = std::list(vectorData.begin(), vectorData.end()); + } + + bool nextPermutation() + { + std::next_permutation(vectorData.begin(), vectorData.end()); + std::next_permutation(listData.begin(), listData.end()); + scratchList = listData; + scratchVector = vectorData; + return vectorData == initialData; + } +}; + +using SortPermutationsTypes = ::testing::Types; + +TYPED_TEST_SUITE(SortTestPermutations, SortPermutationsTypes); + +TYPED_TEST(SortTestPermutations, insertionsort) +{ + while (!this->nextPermutation()) { + auto & list = this->scratchList; + insertionsort(list.begin(), list.end()); + ASSERT_TRUE(std::is_sorted(list.begin(), list.end())); + auto & vector = this->scratchVector; + insertionsort(vector.begin(), vector.end()); + ASSERT_TRUE(std::is_sorted(vector.begin(), vector.end())); + } +} + +TYPED_TEST(SortTestPermutations, peeksort) +{ + while (!this->nextPermutation()) { + auto & vector = this->scratchVector; + peeksort(vector.begin(), vector.end()); + ASSERT_TRUE(std::is_sorted(vector.begin(), vector.end())); + } +} + +TEST(InsertionSort, empty) +{ + std::vector empty; + insertionsort(empty.begin(), empty.end()); +} + +struct RandomPeekSort : public ::testing::TestWithParam< + std::tuple> +{ + using ValueType = int; + std::vector data_; + std::mt19937 urng_; + std::uniform_int_distribution distribution_; + + void SetUp() override + { + auto [maxSize, min, max, iterations] = GetParam(); + urng_ = std::mt19937(GTEST_FLAG_GET(random_seed)); + distribution_ = std::uniform_int_distribution(min, max); + } + + auto regenerate() + { + auto [maxSize, min, max, iterations] = GetParam(); + std::size_t dataSize = std::uniform_int_distribution(0, maxSize)(urng_); + data_.resize(dataSize); + std::generate(data_.begin(), data_.end(), [&]() { return distribution_(urng_); }); + } +}; + +TEST_P(RandomPeekSort, defaultComparator) +{ + auto [maxSize, min, max, iterations] = GetParam(); + + for (std::size_t i = 0; i < iterations; ++i) { + regenerate(); + peeksort(data_.begin(), data_.end()); + ASSERT_TRUE(std::is_sorted(data_.begin(), data_.end())); + /* Sorting is idempotent */ + peeksort(data_.begin(), data_.end()); + ASSERT_TRUE(std::is_sorted(data_.begin(), data_.end())); + } +} + +TEST_P(RandomPeekSort, greater) +{ + auto [maxSize, min, max, iterations] = GetParam(); + + for (std::size_t i = 0; i < iterations; ++i) { + regenerate(); + peeksort(data_.begin(), data_.end(), std::greater{}); + ASSERT_TRUE(std::is_sorted(data_.begin(), data_.end(), std::greater{})); + /* Sorting is idempotent */ + peeksort(data_.begin(), data_.end(), std::greater{}); + ASSERT_TRUE(std::is_sorted(data_.begin(), data_.end(), std::greater{})); + } +} + +TEST_P(RandomPeekSort, brokenComparator) +{ + auto [maxSize, min, max, iterations] = GetParam(); + + /* This is a pretty nice way of modeling a worst-case scenario for a broken comparator. + If the sorting algorithm doesn't break in such case, then surely all deterministic + predicates won't break it. */ + auto comp = [&]([[maybe_unused]] const auto & lhs, [[maybe_unused]] const auto & rhs) -> bool { + return std::uniform_int_distribution(0, 1)(urng_); + }; + + for (std::size_t i = 0; i < iterations; ++i) { + regenerate(); + auto originalData = data_; + peeksort(data_.begin(), data_.end(), comp); + /* Check that the output is just a reordering of the input. This is the + contract of the implementation in regard to comparators that don't + define a strict weak order. 
*/ + std::sort(data_.begin(), data_.end()); + std::sort(originalData.begin(), originalData.end()); + ASSERT_EQ(originalData, data_); + } +} + +TEST_P(RandomPeekSort, stability) +{ + auto [maxSize, min, max, iterations] = GetParam(); + + for (std::size_t i = 0; i < iterations; ++i) { + regenerate(); + std::vector> pairs; + + /* Assign sequential ids to objects. After the sort ids for equivalent + elements should be in ascending order. */ + std::transform( + data_.begin(), data_.end(), std::back_inserter(pairs), [id = std::size_t{0}](auto && val) mutable { + return std::pair{val, ++id}; + }); + + auto comp = [&]([[maybe_unused]] const auto & lhs, [[maybe_unused]] const auto & rhs) -> bool { + return lhs.first > rhs.first; + }; + + peeksort(pairs.begin(), pairs.end(), comp); + ASSERT_TRUE(std::is_sorted(pairs.begin(), pairs.end(), comp)); + + for (auto begin = pairs.begin(), end = pairs.end(); begin < end; ++begin) { + auto key = begin->first; + auto innerEnd = std::find_if_not(begin, end, [key](const auto & lhs) { return lhs.first == key; }); + ASSERT_TRUE(std::is_sorted(begin, innerEnd, [](const auto & lhs, const auto & rhs) { + return lhs.second < rhs.second; + })); + begin = innerEnd; + } + } +} + +using RandomPeekSortParamType = RandomPeekSort::ParamType; + +INSTANTIATE_TEST_SUITE_P( + PeekSort, + RandomPeekSort, + ::testing::Values( + RandomPeekSortParamType{128, std::numeric_limits::min(), std::numeric_limits::max(), 1024}, + RandomPeekSortParamType{7753, -32, 32, 128}, + RandomPeekSortParamType{11719, std::numeric_limits::min(), std::numeric_limits::max(), 64}, + RandomPeekSortParamType{4063, 0, 32, 256}, + RandomPeekSortParamType{771, -8, 8, 2048}, + RandomPeekSortParamType{433, 0, 1, 2048}, + RandomPeekSortParamType{0, 0, 0, 1}, /* empty case */ + RandomPeekSortParamType{ + 1, std::numeric_limits::min(), std::numeric_limits::max(), 1}, /* single element */ + RandomPeekSortParamType{ + 2, std::numeric_limits::min(), std::numeric_limits::max(), 2}, /* two elements */ + RandomPeekSortParamType{55425, std::numeric_limits::min(), std::numeric_limits::max(), 128})); + +template +struct SortProperty : public ::testing::Test +{}; + +using SortPropertyTypes = ::testing::Types; +TYPED_TEST_SUITE(SortProperty, SortPropertyTypes); + +RC_GTEST_TYPED_FIXTURE_PROP(SortProperty, peeksortSorted, (std::vector vec)) +{ + peeksort(vec.begin(), vec.end()); + RC_ASSERT(std::is_sorted(vec.begin(), vec.end())); +} + +RC_GTEST_TYPED_FIXTURE_PROP(SortProperty, peeksortSortedGreater, (std::vector vec)) +{ + auto comp = std::greater(); + peeksort(vec.begin(), vec.end(), comp); + RC_ASSERT(std::is_sorted(vec.begin(), vec.end(), comp)); +} + +RC_GTEST_TYPED_FIXTURE_PROP(SortProperty, insertionsortSorted, (std::vector vec)) +{ + insertionsort(vec.begin(), vec.end()); + RC_ASSERT(std::is_sorted(vec.begin(), vec.end())); +} + +RC_GTEST_PROP(SortProperty, peeksortStability, (std::vector> vec)) +{ + auto comp = [](auto lhs, auto rhs) { return lhs.first < rhs.first; }; + auto copy = vec; + std::stable_sort(copy.begin(), copy.end(), comp); + peeksort(vec.begin(), vec.end(), comp); + RC_ASSERT(copy == vec); +} + +RC_GTEST_TYPED_FIXTURE_PROP(SortProperty, peeksortSortedLinearComparisonComplexity, (std::vector vec)) +{ + peeksort(vec.begin(), vec.end()); + RC_ASSERT(std::is_sorted(vec.begin(), vec.end())); + std::size_t comparisonCount = 0; + auto countingComp = [&](auto lhs, auto rhs) { + ++comparisonCount; + return lhs < rhs; + }; + + peeksort(vec.begin(), vec.end(), countingComp); + + /* In the sorted case comparison 
complexify should be linear. */ + RC_ASSERT(comparisonCount <= vec.size()); +} + +} // namespace nix diff --git a/src/libutil-tests/strings.cc b/src/libutil-tests/strings.cc index f5af4e0ff..bf1f66025 100644 --- a/src/libutil-tests/strings.cc +++ b/src/libutil-tests/strings.cc @@ -106,7 +106,7 @@ TEST(concatMapStringsSep, two) TEST(concatMapStringsSep, map) { - std::map strings; + StringMap strings; strings["this"] = "that"; strings["1"] = "one"; diff --git a/src/libutil-tests/url.cc b/src/libutil-tests/url.cc index 4c089c106..c93a96d84 100644 --- a/src/libutil-tests/url.cc +++ b/src/libutil-tests/url.cc @@ -5,8 +5,8 @@ namespace nix { /* ----------- tests for url.hh --------------------------------------------------*/ - std::string print_map(std::map m) { - std::map::iterator it; + std::string print_map(StringMap m) { + StringMap::iterator it; std::string s = "{ "; for (it = m.begin(); it != m.end(); ++it) { s += "{ "; diff --git a/src/libutil/archive.cc b/src/libutil/archive.cc index 487873ce6..9069e4b49 100644 --- a/src/libutil/archive.cc +++ b/src/libutil/archive.cc @@ -72,7 +72,7 @@ void SourceAccessor::dumpPath( /* If we're on a case-insensitive system like macOS, undo the case hack applied by restorePath(). */ - std::map unhacked; + StringMap unhacked; for (auto & i : readDirectory(path)) if (archiveSettings.useCaseHack) { std::string name(i.first); diff --git a/src/libutil/current-process.cc b/src/libutil/current-process.cc index 4cc5a4218..1afefbcb2 100644 --- a/src/libutil/current-process.cc +++ b/src/libutil/current-process.cc @@ -16,7 +16,12 @@ #ifdef __linux__ # include # include "nix/util/cgroup.hh" -# include "nix/util/namespaces.hh" +# include "nix/util/linux-namespaces.hh" +#endif + +#ifdef __FreeBSD__ +# include +# include #endif namespace nix { @@ -115,6 +120,24 @@ std::optional getSelfExe() return buf; else return std::nullopt; + #elif defined(__FreeBSD__) + int sysctlName[] = { + CTL_KERN, + KERN_PROC, + KERN_PROC_PATHNAME, + -1, + }; + size_t pathLen = 0; + if (sysctl(sysctlName, sizeof(sysctlName) / sizeof(sysctlName[0]), nullptr, &pathLen, nullptr, 0) < 0) { + return std::nullopt; + } + + std::vector path(pathLen); + if (sysctl(sysctlName, sizeof(sysctlName) / sizeof(sysctlName[0]), path.data(), &pathLen, nullptr, 0) < 0) { + return std::nullopt; + } + + return Path(path.begin(), path.end()); #else return std::nullopt; #endif diff --git a/src/libutil/environment-variables.cc b/src/libutil/environment-variables.cc index 0b668f125..adae17734 100644 --- a/src/libutil/environment-variables.cc +++ b/src/libutil/environment-variables.cc @@ -21,9 +21,9 @@ std::optional getEnvNonEmpty(const std::string & key) return value; } -std::map getEnv() +StringMap getEnv() { - std::map env; + StringMap env; for (size_t i = 0; environ[i]; ++i) { auto s = environ[i]; auto eq = strchr(s, '='); @@ -41,7 +41,7 @@ void clearEnv() unsetenv(name.first.c_str()); } -void replaceEnv(const std::map & newEnv) +void replaceEnv(const StringMap & newEnv) { clearEnv(); for (auto & newEnvVar : newEnv) diff --git a/src/libutil/error.cc b/src/libutil/error.cc index 0ceaa4e76..049555ea3 100644 --- a/src/libutil/error.cc +++ b/src/libutil/error.cc @@ -13,7 +13,7 @@ namespace nix { -void BaseError::addTrace(std::shared_ptr && e, HintFmt hint, TracePrint print) +void BaseError::addTrace(std::shared_ptr && e, HintFmt hint, TracePrint print) { err.traces.push_front(Trace { .pos = std::move(e), .hint = hint, .print = print }); } @@ -146,7 +146,7 @@ static bool printUnknownLocations = 
getEnv("_NIX_EVAL_SHOW_UNKNOWN_LOCATIONS").h * * @return true if a position was printed. */ -static bool printPosMaybe(std::ostream & oss, std::string_view indent, const std::shared_ptr & pos) { +static bool printPosMaybe(std::ostream & oss, std::string_view indent, const std::shared_ptr & pos) { bool hasPos = pos && *pos; if (hasPos) { oss << indent << ANSI_BLUE << "at " ANSI_WARNING << *pos << ANSI_NORMAL << ":"; diff --git a/src/libutil/file-system.cc b/src/libutil/file-system.cc index f2594fbfd..79e6cf354 100644 --- a/src/libutil/file-system.cc +++ b/src/libutil/file-system.cc @@ -8,12 +8,12 @@ #include "nix/util/util.hh" #include +#include #include #include #include #include #include -#include #include #include @@ -23,14 +23,15 @@ #include +#ifdef __FreeBSD__ +# include +# include +#endif + #ifdef _WIN32 # include #endif -#include "nix/util/strings-inline.hh" - -#include "util-config-private.hh" - namespace nix { DirectoryIterator::DirectoryIterator(const std::filesystem::path& p) { @@ -375,6 +376,13 @@ void syncParent(const Path & path) fd.fsync(); } +#ifdef __FreeBSD__ +#define MOUNTEDPATHS_PARAM , std::set &mountedPaths +#define MOUNTEDPATHS_ARG , mountedPaths +#else +#define MOUNTEDPATHS_PARAM +#define MOUNTEDPATHS_ARG +#endif void recursiveSync(const Path & path) { @@ -421,11 +429,19 @@ void recursiveSync(const Path & path) } -static void _deletePath(Descriptor parentfd, const std::filesystem::path & path, uint64_t & bytesFreed, std::exception_ptr & ex) +static void _deletePath(Descriptor parentfd, const std::filesystem::path & path, uint64_t & bytesFreed, std::exception_ptr & ex MOUNTEDPATHS_PARAM) { #ifndef _WIN32 checkInterrupt(); +#ifdef __FreeBSD__ + // In case of emergency (unmount fails for some reason) not recurse into mountpoints. + // This prevents us from tearing up the nullfs-mounted nix store. + if (mountedPaths.find(path) != mountedPaths.end()) { + return; + } +#endif + std::string name(path.filename()); assert(name != "." && name != ".." && !name.empty()); @@ -480,7 +496,7 @@ static void _deletePath(Descriptor parentfd, const std::filesystem::path & path, checkInterrupt(); std::string childName = dirent->d_name; if (childName == "." 
|| childName == "..") continue; - _deletePath(dirfd(dir.get()), path / childName, bytesFreed, ex); + _deletePath(dirfd(dir.get()), path / childName, bytesFreed, ex MOUNTEDPATHS_ARG); } if (errno) throw SysError("reading directory %1%", path); } @@ -503,7 +519,7 @@ static void _deletePath(Descriptor parentfd, const std::filesystem::path & path, #endif } -static void _deletePath(const std::filesystem::path & path, uint64_t & bytesFreed) +static void _deletePath(const std::filesystem::path & path, uint64_t & bytesFreed MOUNTEDPATHS_PARAM) { assert(path.is_absolute()); assert(path.parent_path() != path); @@ -516,7 +532,7 @@ static void _deletePath(const std::filesystem::path & path, uint64_t & bytesFree std::exception_ptr ex; - _deletePath(dirfd.get(), path, bytesFreed, ex); + _deletePath(dirfd.get(), path, bytesFreed, ex MOUNTEDPATHS_ARG); if (ex) std::rethrow_exception(ex); @@ -552,8 +568,20 @@ void createDirs(const std::filesystem::path & path) void deletePath(const std::filesystem::path & path, uint64_t & bytesFreed) { //Activity act(*logger, lvlDebug, "recursively deleting path '%1%'", path); +#ifdef __FreeBSD__ + std::set mountedPaths; + struct statfs *mntbuf; + int count; + if ((count = getmntinfo(&mntbuf, MNT_WAIT)) < 0) { + throw SysError("getmntinfo"); + } + + for (int i = 0; i < count; i++) { + mountedPaths.emplace(mntbuf[i].f_mntonname); + } +#endif bytesFreed = 0; - _deletePath(path, bytesFreed); + _deletePath(path, bytesFreed MOUNTEDPATHS_ARG); } @@ -595,32 +623,41 @@ void AutoDelete::reset(const std::filesystem::path & p, bool recursive) { ////////////////////////////////////////////////////////////////////// +#ifdef __FreeBSD__ +AutoUnmount::AutoUnmount() : del{false} {} + +AutoUnmount::AutoUnmount(Path &p) : path(p), del(true) {} + +AutoUnmount::~AutoUnmount() +{ + try { + if (del) { + if (unmount(path.c_str(), 0) < 0) { + throw SysError("Failed to unmount path %1%", path); + } + } + } catch (...) { + ignoreExceptionInDestructor(); + } +} + +void AutoUnmount::cancel() +{ + del = false; +} +#endif + ////////////////////////////////////////////////////////////////////// std::string defaultTempDir() { return getEnvNonEmpty("TMPDIR").value_or("/tmp"); } -static Path tempName(Path tmpRoot, const Path & prefix, bool includePid, - std::atomic & counter) +Path createTempDir(const Path & tmpRoot, const Path & prefix, mode_t mode) { - tmpRoot = canonPath(tmpRoot.empty() ? defaultTempDir() : tmpRoot, true); - if (includePid) - return fmt("%1%/%2%-%3%-%4%", tmpRoot, prefix, getpid(), counter++); - else - return fmt("%1%/%2%-%3%", tmpRoot, prefix, counter++); -} - -Path createTempDir(const Path & tmpRoot, const Path & prefix, - bool includePid, bool useGlobalCounter, mode_t mode) -{ - static std::atomic globalCounter = 0; - std::atomic localCounter = 0; - auto & counter(useGlobalCounter ? globalCounter : localCounter); - while (1) { checkInterrupt(); - Path tmpDir = tempName(tmpRoot, prefix, includePid, counter); + Path tmpDir = makeTempPath(tmpRoot, prefix); if (mkdir(tmpDir.c_str() #ifndef _WIN32 // TODO abstract mkdir perms for Windows , mode @@ -660,6 +697,14 @@ std::pair createTempFile(const Path & prefix) return {std::move(fd), tmpl}; } +Path makeTempPath(const Path & root, const Path & suffix) +{ + // start the counter at a random value to minimize issues with preexisting temp paths + static std::atomic counter(std::random_device{}()); + auto tmpRoot = canonPath(root.empty() ? 
defaultTempDir() : root, true); + return fmt("%1%/%2%-%3%-%4%", tmpRoot, suffix, getpid(), counter.fetch_add(1, std::memory_order_relaxed)); +} + void createSymlink(const Path & target, const Path & link) { try { diff --git a/src/libutil/freebsd/freebsd-jail.cc b/src/libutil/freebsd/freebsd-jail.cc new file mode 100644 index 000000000..575f9287e --- /dev/null +++ b/src/libutil/freebsd/freebsd-jail.cc @@ -0,0 +1,52 @@ +#ifdef __FreeBSD__ +# include "nix/util/freebsd-jail.hh" + +# include +# include +# include +# include + +# include "nix/util/error.hh" +# include "nix/util/util.hh" + +namespace nix { + +AutoRemoveJail::AutoRemoveJail() + : del{false} +{ +} + +AutoRemoveJail::AutoRemoveJail(int jid) + : jid(jid) + , del(true) +{ +} + +AutoRemoveJail::~AutoRemoveJail() +{ + try { + if (del) { + if (jail_remove(jid) < 0) { + throw SysError("Failed to remove jail %1%", jid); + } + } + } catch (...) { + ignoreExceptionInDestructor(); + } +} + +void AutoRemoveJail::cancel() +{ + del = false; +} + +void AutoRemoveJail::reset(int j) +{ + del = true; + jid = j; +} + +////////////////////////////////////////////////////////////////////// + +} +#endif diff --git a/src/libutil/freebsd/include/nix/util/freebsd-jail.hh b/src/libutil/freebsd/include/nix/util/freebsd-jail.hh new file mode 100644 index 000000000..cb5abc511 --- /dev/null +++ b/src/libutil/freebsd/include/nix/util/freebsd-jail.hh @@ -0,0 +1,20 @@ +#pragma once +///@file + +#include "nix/util/types.hh" + +namespace nix { + +class AutoRemoveJail +{ + int jid; + bool del; +public: + AutoRemoveJail(int jid); + AutoRemoveJail(); + ~AutoRemoveJail(); + void cancel(); + void reset(int j); +}; + +} diff --git a/src/libutil/freebsd/include/nix/util/meson.build b/src/libutil/freebsd/include/nix/util/meson.build new file mode 100644 index 000000000..4b7d78624 --- /dev/null +++ b/src/libutil/freebsd/include/nix/util/meson.build @@ -0,0 +1,8 @@ +# Public headers directory + +include_dirs += include_directories('../..') + +headers += files( + 'freebsd-jail.hh', + # hack for trailing newline +) diff --git a/src/libutil/freebsd/meson.build b/src/libutil/freebsd/meson.build new file mode 100644 index 000000000..d9b91a03d --- /dev/null +++ b/src/libutil/freebsd/meson.build @@ -0,0 +1,6 @@ +sources += files( + 'freebsd-jail.cc', + # hack for trailing newline +) + +subdir('include/nix/util') diff --git a/src/libutil/hash.cc b/src/libutil/hash.cc index 53942c956..6d279f3c8 100644 --- a/src/libutil/hash.cc +++ b/src/libutil/hash.cc @@ -146,7 +146,7 @@ Hash Hash::parseSRI(std::string_view original) { auto rest = original; - // Parse the has type before the separater, if there was one. + // Parse the has type before the separator, if there was one. 
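For context on the `file-system.cc` change above, which replaces the old `tempName`/`createTempDir` counter logic with the new `makeTempPath` helper, the following is a minimal usage sketch. It assumes the declaration added to `nix/util/file-system.hh` in this patch and a program built against `libnixutil`; the printed path shape is inferred from the `fmt` call shown above and is illustrative only.

```cpp
#include "nix/util/file-system.hh"
#include <iostream>

int main()
{
    using namespace nix;
    // With an empty root, defaultTempDir() is used ($TMPDIR, falling back to /tmp).
    // The helper only constructs a name; it does not create anything on disk.
    Path p = makeTempPath("", "nix-example");
    std::cout << p << std::endl; // e.g. /tmp/nix-example-<pid>-<counter>
}
```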
auto hashRaw = splitPrefixTo(rest, '-'); if (!hashRaw) throw BadHash("hash '%s' is not SRI", original); diff --git a/src/libutil/hilite.cc b/src/libutil/hilite.cc index cfadd6af9..6d4eb17a1 100644 --- a/src/libutil/hilite.cc +++ b/src/libutil/hilite.cc @@ -23,7 +23,7 @@ std::string hiliteMatches( auto m = *it; size_t start = m.position(); out.append(s.substr(last_end, m.position() - last_end)); - // Merge continous matches + // Merge continuous matches ssize_t end = start + m.length(); while (++it != matches.end() && (*it).position() <= end) { auto n = *it; diff --git a/src/libutil/include/nix/util/args.hh b/src/libutil/include/nix/util/args.hh index f1eb96675..f3ab0b532 100644 --- a/src/libutil/include/nix/util/args.hh +++ b/src/libutil/include/nix/util/args.hh @@ -252,7 +252,7 @@ protected: std::list processedArgs; /** - * Process some positional arugments + * Process some positional arguments * * @param finish: We have parsed everything else, and these are the only * arguments left. Used because we accumulate some "pending args" we might diff --git a/src/libutil/include/nix/util/comparator.hh b/src/libutil/include/nix/util/comparator.hh index 34ba6f453..c3af1758d 100644 --- a/src/libutil/include/nix/util/comparator.hh +++ b/src/libutil/include/nix/util/comparator.hh @@ -16,7 +16,7 @@ /** * Awful hacky generation of the comparison operators by doing a lexicographic - * comparison between the choosen fields. + * comparison between the chosen fields. * * ``` * GENERATE_CMP(ClassName, me->field1, me->field2, ...) diff --git a/src/libutil/include/nix/util/environment-variables.hh b/src/libutil/include/nix/util/environment-variables.hh index d6c7472fc..9b2fab4f4 100644 --- a/src/libutil/include/nix/util/environment-variables.hh +++ b/src/libutil/include/nix/util/environment-variables.hh @@ -34,7 +34,7 @@ std::optional getEnvNonEmpty(const std::string & key); /** * Get the entire environment. */ -std::map getEnv(); +StringMap getEnv(); #ifdef _WIN32 /** @@ -64,6 +64,6 @@ void clearEnv(); /** * Replace the entire environment with the given one. */ -void replaceEnv(const std::map & newEnv); +void replaceEnv(const StringMap & newEnv); } diff --git a/src/libutil/include/nix/util/error.hh b/src/libutil/include/nix/util/error.hh index fa60d4c61..7c96112ea 100644 --- a/src/libutil/include/nix/util/error.hh +++ b/src/libutil/include/nix/util/error.hh @@ -78,7 +78,7 @@ enum struct TracePrint { }; struct Trace { - std::shared_ptr pos; + std::shared_ptr pos; HintFmt hint; TracePrint print = TracePrint::Default; }; @@ -88,7 +88,7 @@ inline std::strong_ordering operator<=>(const Trace& lhs, const Trace& rhs); struct ErrorInfo { Verbosity level; HintFmt msg; - std::shared_ptr pos; + std::shared_ptr pos; std::list traces; /** * Some messages are generated directly by expressions; notably `builtins.warn`, `abort`, `throw`. @@ -172,7 +172,7 @@ public: err.status = status; } - void atPos(std::shared_ptr pos) { + void atPos(std::shared_ptr pos) { err.pos = pos; } @@ -182,12 +182,12 @@ public: } template - void addTrace(std::shared_ptr && e, std::string_view fs, const Args & ... args) + void addTrace(std::shared_ptr && e, std::string_view fs, const Args & ... 
args) { addTrace(std::move(e), HintFmt(std::string(fs), args...)); } - void addTrace(std::shared_ptr && e, HintFmt hint, TracePrint print = TracePrint::Default); + void addTrace(std::shared_ptr && e, HintFmt hint, TracePrint print = TracePrint::Default); bool hasTrace() const { return !err.traces.empty(); } diff --git a/src/libutil/include/nix/util/executable-path.hh b/src/libutil/include/nix/util/executable-path.hh index 700d296d5..cf6f3b252 100644 --- a/src/libutil/include/nix/util/executable-path.hh +++ b/src/libutil/include/nix/util/executable-path.hh @@ -8,7 +8,7 @@ namespace nix { MakeError(ExecutableLookupError, Error); /** - * @todo rename, it is not just good for execuatable paths, but also + * @todo rename, it is not just good for executable paths, but also * other lists of paths. */ struct ExecutablePath @@ -51,7 +51,7 @@ struct ExecutablePath * * @param exe This must just be a name, and not contain any `/` (or * `\` on Windows). in case it does, per the spec no lookup should - * be perfomed, and the path (it is not just a file name) as is. + * be performed, and the path (it is not just a file name) as is. * This is the caller's respsonsibility. * * This is a pure function, except for the default `isExecutable` diff --git a/src/libutil/include/nix/util/file-descriptor.hh b/src/libutil/include/nix/util/file-descriptor.hh index 4f13a9a8f..e2bcce2a2 100644 --- a/src/libutil/include/nix/util/file-descriptor.hh +++ b/src/libutil/include/nix/util/file-descriptor.hh @@ -68,7 +68,7 @@ static inline int fromDescriptorReadOnly(Descriptor fd) std::string readFile(Descriptor fd); /** - * Wrappers arount read()/write() that read/write exactly the + * Wrappers around read()/write() that read/write exactly the * requested number of bytes. */ void readFull(Descriptor fd, char * buf, size_t count); diff --git a/src/libutil/include/nix/util/file-path-impl.hh b/src/libutil/include/nix/util/file-path-impl.hh index d7c823fd0..1b4dd28f1 100644 --- a/src/libutil/include/nix/util/file-path-impl.hh +++ b/src/libutil/include/nix/util/file-path-impl.hh @@ -11,7 +11,7 @@ namespace nix { /** - * Unix-style path primives. + * Unix-style path primitives. * * Nix'result own "logical" paths are always Unix-style. So this is always * used for that, and additionally used for native paths on Unix. @@ -51,7 +51,7 @@ struct UnixPathTrait * often manipulating them converted to UTF-8 (*) using `char`. * * (Actually neither are guaranteed to be valid unicode; both are - * arbitrary non-0 8- or 16-bit bytes. But for charcters with specifical + * arbitrary non-0 8- or 16-bit bytes. But for characters with specifical * meaning like '/', '\\', ':', etc., we refer to an encoding scheme, * and also for sake of UIs that display paths a text.) */ diff --git a/src/libutil/include/nix/util/file-system.hh b/src/libutil/include/nix/util/file-system.hh index a9a6e43bf..c45cb55aa 100644 --- a/src/libutil/include/nix/util/file-system.hh +++ b/src/libutil/include/nix/util/file-system.hh @@ -6,8 +6,6 @@ */ #include "nix/util/types.hh" -#include "nix/util/error.hh" -#include "nix/util/logging.hh" #include "nix/util/file-descriptor.hh" #include "nix/util/file-path.hh" @@ -18,12 +16,8 @@ #ifdef _WIN32 # include #endif -#include -#include #include -#include -#include #include /** @@ -323,7 +317,7 @@ typedef std::unique_ptr AutoCloseDir; * Create a temporary directory. 
*/ Path createTempDir(const Path & tmpRoot = "", const Path & prefix = "nix", - bool includePid = true, bool useGlobalCounter = true, mode_t mode = 0755); + mode_t mode = 0755); /** * Create a temporary file, returning a file handle and its path. @@ -341,6 +335,14 @@ Path defaultTempDir(); */ bool isExecutableFileAmbient(const std::filesystem::path & exe); +/** + * Return temporary path constructed by appending a suffix to a root path. + * + * The constructed path looks like `--`. To create a + * path nested in a directory, provide a suffix starting with `/`. + */ +Path makeTempPath(const Path & root, const Path & suffix = ".tmp"); + /** * Used in various places. */ @@ -424,4 +426,17 @@ private: std::filesystem::directory_iterator it_; }; +#ifdef __FreeBSD__ +class AutoUnmount +{ + Path path; + bool del; +public: + AutoUnmount(Path&); + AutoUnmount(); + ~AutoUnmount(); + void cancel(); +}; +#endif + } diff --git a/src/libutil/include/nix/util/hash.hh b/src/libutil/include/nix/util/hash.hh index 5dc3d1017..1c7b8ed9c 100644 --- a/src/libutil/include/nix/util/hash.hh +++ b/src/libutil/include/nix/util/hash.hh @@ -38,7 +38,7 @@ enum struct HashFormat : int { /// @brief Lowercase hexadecimal encoding. @see base16Chars Base16, /// @brief ":", format of the SRI integrity attribute. - /// @see W3C recommendation [Subresource Intergrity](https://www.w3.org/TR/SRI/). + /// @see W3C recommendation [Subresource Integrity](https://www.w3.org/TR/SRI/). SRI }; @@ -68,7 +68,7 @@ struct Hash /** * Parse a hash from a string representation like the above, except the - * type prefix is mandatory is there is no separate arguement. + * type prefix is mandatory is there is no separate argument. */ static Hash parseAnyPrefixed(std::string_view s); diff --git a/src/libutil/include/nix/util/logging.hh b/src/libutil/include/nix/util/logging.hh index 920e9fb20..2b71c4171 100644 --- a/src/libutil/include/nix/util/logging.hh +++ b/src/libutil/include/nix/util/logging.hh @@ -57,9 +57,11 @@ struct LoggerSettings : Config Setting jsonLogPath{ this, "", "json-log-path", R"( - A path to which JSON records of Nix's log output are + A file or Unix domain socket to which JSON records of Nix's log output are written, in the same format as `--log-format internal-json` (without the `@nix ` prefixes on each line). + Concurrent writes to the same file by multiple Nix processes are not supported and + may result in interleaved or corrupted log records. )"}; }; diff --git a/src/libutil/include/nix/util/lru-cache.hh b/src/libutil/include/nix/util/lru-cache.hh index c9bcd7ee0..0834a8e74 100644 --- a/src/libutil/include/nix/util/lru-cache.hh +++ b/src/libutil/include/nix/util/lru-cache.hh @@ -33,6 +33,18 @@ private: Data data; LRU lru; + /** + * Move this item to the back of the LRU list. + */ + void promote(LRU::iterator it) + { + /* Think of std::list iterators as stable pointers to the list node, + * which never get invalidated. Thus, we can reuse the same lru list + * element and just splice it to the back of the list without the need + * to update its value in the key -> list iterator map. */ + lru.splice(/*pos=*/lru.end(), /*other=*/lru, it); + } + public: LRUCache(size_t capacity) @@ -83,7 +95,9 @@ public: /** * Look up an item in the cache. If it exists, it becomes the most * recently used item. 
- * */ + * + * @returns corresponding cache entry, std::nullopt if it's not in the cache + */ template std::optional get(const K & key) { @@ -91,20 +105,30 @@ public: if (i == data.end()) return {}; - /** - * Move this item to the back of the LRU list. - * - * Think of std::list iterators as stable pointers to the list node, - * which never get invalidated. Thus, we can reuse the same lru list - * element and just splice it to the back of the list without the need - * to update its value in the key -> list iterator map. - */ auto & [it, value] = i->second; - lru.splice(/*pos=*/lru.end(), /*other=*/lru, it.it); - + promote(it.it); return value; } + /** + * Look up an item in the cache. If it exists, it becomes the most + * recently used item. + * + * @returns mutable pointer to the corresponding cache entry, nullptr if + * it's not in the cache + */ + template + Value * getOrNullptr(const K & key) + { + auto i = data.find(key); + if (i == data.end()) + return nullptr; + + auto & [it, value] = i->second; + promote(it.it); + return &value; + } + size_t size() const noexcept { return data.size(); diff --git a/src/libutil/include/nix/util/meson.build b/src/libutil/include/nix/util/meson.build index 3dacfafc6..e3be662a3 100644 --- a/src/libutil/include/nix/util/meson.build +++ b/src/libutil/include/nix/util/meson.build @@ -1,6 +1,6 @@ # Public headers directory -include_dirs = [include_directories('../..')] +include_dirs = [ include_directories('../..') ] headers = files( 'abstract-setting-to-json.hh', @@ -61,12 +61,13 @@ headers = files( 'signals.hh', 'signature/local-keys.hh', 'signature/signer.hh', + 'sort.hh', 'source-accessor.hh', 'source-path.hh', 'split.hh', 'std-hash.hh', - 'strings.hh', 'strings-inline.hh', + 'strings.hh', 'suggestions.hh', 'sync.hh', 'tarfile.hh', diff --git a/src/libutil/include/nix/util/pos-idx.hh b/src/libutil/include/nix/util/pos-idx.hh index 4f305bdd8..423f8b032 100644 --- a/src/libutil/include/nix/util/pos-idx.hh +++ b/src/libutil/include/nix/util/pos-idx.hh @@ -8,7 +8,7 @@ namespace nix { class PosIdx { - friend struct LazyPosAcessors; + friend struct LazyPosAccessors; friend class PosTable; friend class std::hash; diff --git a/src/libutil/include/nix/util/pos-table.hh b/src/libutil/include/nix/util/pos-table.hh index ef170e0f1..f64466c21 100644 --- a/src/libutil/include/nix/util/pos-table.hh +++ b/src/libutil/include/nix/util/pos-table.hh @@ -4,6 +4,7 @@ #include #include +#include "nix/util/lru-cache.hh" #include "nix/util/pos-idx.hh" #include "nix/util/position.hh" #include "nix/util/sync.hh" @@ -37,10 +38,20 @@ public: }; private: + /** + * Vector of byte offsets (in the virtual input buffer) of initial line character's position. + * Sorted by construction. Binary search over it allows for efficient translation of arbitrary + * byte offsets in the virtual input buffer to its line + column position. + */ using Lines = std::vector; + /** + * Cache from byte offset in the virtual buffer of Origins -> @ref Lines in that origin. 
+ */ + using LinesCache = LRUCache; std::map origins; - mutable Sync> lines; + + mutable Sync linesCache; const Origin * resolve(PosIdx p) const { @@ -56,6 +67,11 @@ private: } public: + PosTable(std::size_t linesCacheCapacity = 65536) + : linesCache(linesCacheCapacity) + { + } + Origin addOrigin(Pos::Origin origin, size_t size) { uint32_t offset = 0; diff --git a/src/libutil/include/nix/util/position.hh b/src/libutil/include/nix/util/position.hh index f9c984976..34cf86392 100644 --- a/src/libutil/include/nix/util/position.hh +++ b/src/libutil/include/nix/util/position.hh @@ -43,15 +43,10 @@ struct Pos Pos() { } Pos(uint32_t line, uint32_t column, Origin origin) : line(line), column(column), origin(origin) { } - Pos(Pos & other) = default; - Pos(const Pos & other) = default; - Pos(Pos && other) = default; - Pos(const Pos * other); explicit operator bool() const { return line > 0; } - /* TODO: Why std::shared_ptr and not std::shared_ptr? */ - operator std::shared_ptr() const; + operator std::shared_ptr() const; /** * Return the contents of the source file. diff --git a/src/libutil/include/nix/util/processes.hh b/src/libutil/include/nix/util/processes.hh index ef7bddf2f..ab5f23e49 100644 --- a/src/libutil/include/nix/util/processes.hh +++ b/src/libutil/include/nix/util/processes.hh @@ -103,7 +103,7 @@ struct RunOptions std::optional gid; #endif std::optional chdir; - std::optional> environment; + std::optional environment; std::optional input; Source * standardIn = nullptr; Sink * standardOut = nullptr; diff --git a/src/libutil/include/nix/util/serialise.hh b/src/libutil/include/nix/util/serialise.hh index d28c8e9a6..97fdddae3 100644 --- a/src/libutil/include/nix/util/serialise.hh +++ b/src/libutil/include/nix/util/serialise.hh @@ -564,7 +564,7 @@ struct FramedSink : nix::BufferedSink void writeUnbuffered(std::string_view data) override { - /* Don't send more data if an error has occured. */ + /* Don't send more data if an error has occurred. */ checkError(); to << data.size(); diff --git a/src/libutil/include/nix/util/sort.hh b/src/libutil/include/nix/util/sort.hh new file mode 100644 index 000000000..0affdf3ce --- /dev/null +++ b/src/libutil/include/nix/util/sort.hh @@ -0,0 +1,299 @@ +#pragma once + +#include +#include +#include +#include +#include +#include + +/** + * @file + * + * In-house implementation of sorting algorithms. Used for cases when several properties + * need to be upheld regardless of the stdlib implementation of std::sort or + * std::stable_sort. + * + * PeekSort implementation is adapted from reference implementation + * https://github.com/sebawild/powersort licensed under the MIT License. + * + */ + +/* PeekSort attribution: + * + * MIT License + * + * Copyright (c) 2022 Sebastian Wild + * + * Permission is hereby granted, free of charge, to any person obtaining a copy + * of this software and associated documentation files (the "Software"), to deal + * in the Software without restriction, including without limitation the rights + * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell + * copies of the Software, and to permit persons to whom the Software is + * furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in all + * copies or substantial portions of the Software. 
+ * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE + * SOFTWARE. + */ + +namespace nix { + +/** + * Merge sorted runs [begin, middle) with [middle, end) in-place [begin, end). + * Uses a temporary working buffer by first copying [begin, end) to it. + * + * @param begin Start of the first subrange to be sorted. + * @param middle End of the first sorted subrange and the start of the second. + * @param end End of the second sorted subrange. + * @param workingBegin Start of the working buffer. + * @param comp Comparator implementing an operator()(const ValueType& lhs, const ValueType& rhs). + * + * @pre workingBegin buffer must have at least std::distance(begin, end) elements. + * + * @note We can't use std::inplace_merge or std::merge, because their behavior + * is undefined if the comparator is not strict weak ordering. + */ +template< + std::forward_iterator Iter, + std::random_access_iterator BufIter, + typename Comparator = std::less>> +void mergeSortedRunsInPlace(Iter begin, Iter middle, Iter end, BufIter workingBegin, Comparator comp = {}) +{ + const BufIter workingMiddle = std::move(begin, middle, workingBegin); + const BufIter workingEnd = std::move(middle, end, workingMiddle); + + Iter output = begin; + BufIter workingLeft = workingBegin; + BufIter workingRight = workingMiddle; + + while (workingLeft != workingMiddle && workingRight != workingEnd) { + /* Note the inversion here !comp(...., ....). This is required for the merge to be stable. + If a == b where a if from the left part and b is the the right, then we have to pick + a. */ + *output++ = !comp(*workingRight, *workingLeft) ? std::move(*workingLeft++) : std::move(*workingRight++); + } + + std::move(workingLeft, workingMiddle, output); + std::move(workingRight, workingEnd, output); +} + +/** + * Simple insertion sort. + * + * Does not require that the std::iter_value_t is copyable. + * + * @param begin Start of the range to sort. + * @param end End of the range to sort. + * @comp Comparator the defines the ordering. Order of elements if the comp is not strict weak ordering + * is not specified. + * @throws Nothing. + * + * Note on exception safety: this function provides weak exception safety + * guarantees. To elaborate: if the comparator throws or move assignment + * throws (value type is not nothrow_move_assignable) then the range is left in + * a consistent, but unspecified state. + * + * @note This can't be implemented in terms of binary search if the strict weak ordering + * needs to be handled in a well-defined but unspecified manner. + */ +template>> +void insertionsort(Iter begin, Iter end, Comparator comp = {}) +{ + if (begin == end) + return; + for (Iter current = std::next(begin); current != end; ++current) { + for (Iter insertionPoint = current; + insertionPoint != begin && comp(*insertionPoint, *std::prev(insertionPoint)); + --insertionPoint) { + std::swap(*insertionPoint, *std::prev(insertionPoint)); + } + } +} + +/** + * Find maximal i <= end such that [begin, i) is strictly decreasing according + * to the specified comparator. 
+ */ +template>> +Iter strictlyDecreasingPrefix(Iter begin, Iter end, Comparator && comp = {}) +{ + if (begin == end) + return begin; + while (std::next(begin) != end && /* *std::next(begin) < begin */ + comp(*std::next(begin), *begin)) + ++begin; + return std::next(begin); +} + +/** + * Find minimal i >= start such that [i, end) is strictly decreasing according + * to the specified comparator. + */ +template>> +Iter strictlyDecreasingSuffix(Iter begin, Iter end, Comparator && comp = {}) +{ + if (begin == end) + return end; + while (std::prev(end) > begin && /* *std::prev(end) < *std::prev(end, 2) */ + comp(*std::prev(end), *std::prev(end, 2))) + --end; + return std::prev(end); +} + +/** + * Find maximal i <= end such that [begin, i) is weakly increasing according + * to the specified comparator. + */ +template>> +Iter weaklyIncreasingPrefix(Iter begin, Iter end, Comparator && comp = {}) +{ + return strictlyDecreasingPrefix(begin, end, std::not_fn(std::forward(comp))); +} + +/** + * Find minimal i >= start such that [i, end) is weakly increasing according + * to the specified comparator. + */ +template>> +Iter weaklyIncreasingSuffix(Iter begin, Iter end, Comparator && comp = {}) +{ + return strictlyDecreasingSuffix(begin, end, std::not_fn(std::forward(comp))); +} + +/** + * Peeksort stable sorting algorithm. Sorts elements in-place. + * Allocates additional memory as needed. + * + * @details + * PeekSort is a stable, near-optimal natural mergesort. Most importantly, like any + * other mergesort it upholds the "Ord safety" property. Meaning that even for + * comparator predicates that don't satisfy strict weak ordering it can't result + * in infinite loops/out of bounds memory accesses or other undefined behavior. + * + * As a quick reminder, strict weak ordering relation operator< must satisfy + * the following properties. Keep in mind that in C++ an equvalence relation + * is specified in terms of operator< like so: a ~ b iff !(a < b) && !(b < a). + * + * 1. a < a === false - relation is irreflexive + * 2. a < b, b < c => a < c - transitivity + * 3. a ~ b, a ~ b, b ~ c => a ~ c, transitivity of equivalence + * + * @see https://www.wild-inter.net/publications/munro-wild-2018 + * @see https://github.com/Voultapher/sort-research-rs/blob/main/writeup/sort_safety/text.md#property-analysis + * + * The order of elements when comp is not strict weak ordering is not specified, but + * is not undefined. The output is always some permutation of the input, regardless + * of the comparator provided. + * Relying on ordering in such cases is erroneous, but this implementation + * will happily accept broken comparators and will not crash. + * + * @param begin Start of the range to be sorted. + * @param end End of the range to be sorted. + * @comp comp Comparator implementing an operator()(const ValueType& lhs, const ValueType& rhs). + * + * @throws std::bad_alloc if the temporary buffer can't be allocated. + * + * @return Nothing. + * + * Note on exception safety: this function provides weak exception safety + * guarantees. To elaborate: if the comparator throws or move assignment + * throws (value type is not nothrow_move_assignable) then the range is left in + * a consistent, but unspecified state. + * + */ +template>> +/* ValueType must be default constructible to create the temporary buffer */ + requires std::is_default_constructible_v> +void peeksort(Iter begin, Iter end, Comparator comp = {}) +{ + auto length = std::distance(begin, end); + + /* Special-case very simple inputs. 
This is identical to how libc++ does it. */ + switch (length) { + case 0: + [[fallthrough]]; + case 1: + return; + case 2: + if (comp(*--end, *begin)) /* [a, b], b < a */ + std::swap(*begin, *end); + return; + } + + using ValueType = std::iter_value_t; + auto workingBuffer = std::vector(length); + + /* + * sorts [begin, end), assuming that [begin, leftRunEnd) and + * [rightRunBegin, end) are sorted. + * Modified implementation from: + * https://github.com/sebawild/powersort/blob/1d078b6be9023e134c4f8f6de88e2406dc681e89/src/sorts/peeksort.h + */ + auto peeksortImpl = [&workingBuffer, + &comp](auto & peeksortImpl, Iter begin, Iter end, Iter leftRunEnd, Iter rightRunBegin) { + if (leftRunEnd == end || rightRunBegin == begin) + return; + + /* Dispatch to simpler insertion sort implementation for smaller cases + Cut-off limit is the same as in libstdc++ + https://github.com/gcc-mirror/gcc/blob/d9375e490072d1aae73a93949aa158fcd2a27018/libstdc%2B%2B-v3/include/bits/stl_algo.h#L4977 + */ + static constexpr std::size_t insertionsortThreshold = 16; + size_t length = std::distance(begin, end); + if (length <= insertionsortThreshold) + return insertionsort(begin, end, comp); + + Iter middle = std::next(begin, (length / 2)); /* Middle split between m and m - 1 */ + + if (middle <= leftRunEnd) { + /* |XXXXXXXX|XX X| */ + peeksortImpl(peeksortImpl, leftRunEnd, end, std::next(leftRunEnd), rightRunBegin); + mergeSortedRunsInPlace(begin, leftRunEnd, end, workingBuffer.begin(), comp); + return; + } else if (middle >= rightRunBegin) { + /* |XX X|XXXXXXXX| */ + peeksortImpl(peeksortImpl, begin, rightRunBegin, leftRunEnd, std::prev(rightRunBegin)); + mergeSortedRunsInPlace(begin, rightRunBegin, end, workingBuffer.begin(), comp); + return; + } + + /* Find middle run, i.e., run containing m - 1 */ + Iter i, j; + + if (!comp(*middle, *std::prev(middle)) /* *std::prev(middle) <= *middle */) { + i = weaklyIncreasingSuffix(leftRunEnd, middle, comp); + j = weaklyIncreasingPrefix(std::prev(middle), rightRunBegin, comp); + } else { + i = strictlyDecreasingSuffix(leftRunEnd, middle, comp); + j = strictlyDecreasingPrefix(std::prev(middle), rightRunBegin, comp); + std::reverse(i, j); + } + + if (i == begin && j == end) + return; /* single run */ + + if (middle - i < j - middle) { + /* |XX x|xxxx X| */ + peeksortImpl(peeksortImpl, begin, i, leftRunEnd, std::prev(i)); + peeksortImpl(peeksortImpl, i, end, j, rightRunBegin); + mergeSortedRunsInPlace(begin, i, end, workingBuffer.begin(), comp); + } else { + /* |XX xxx|x X| */ + peeksortImpl(peeksortImpl, begin, j, leftRunEnd, i); + peeksortImpl(peeksortImpl, j, end, std::next(j), rightRunBegin); + mergeSortedRunsInPlace(begin, j, end, workingBuffer.begin(), comp); + } + }; + + peeksortImpl(peeksortImpl, begin, end, /*leftRunEnd=*/begin, /*rightRunBegin=*/end); +} + +} diff --git a/src/libutil/include/nix/util/source-accessor.hh b/src/libutil/include/nix/util/source-accessor.hh index 4084b3bdc..c0e8528db 100644 --- a/src/libutil/include/nix/util/source-accessor.hh +++ b/src/libutil/include/nix/util/source-accessor.hh @@ -54,7 +54,7 @@ struct SourceAccessor : std::enable_shared_from_this * * @note Unlike Unix, this method should *not* follow symlinks. Nix * by default wants to manipulate symlinks explicitly, and not - * implictly follow them, as they are frequently untrusted user data + * implicitly follow them, as they are frequently untrusted user data * and thus may point to arbitrary locations. 
Acting on the targets * targets of symlinks should only occasionally be done, and only * with care. diff --git a/src/libutil/include/nix/util/sync.hh b/src/libutil/include/nix/util/sync.hh index 0c3e1f528..4b9d546d2 100644 --- a/src/libutil/include/nix/util/sync.hh +++ b/src/libutil/include/nix/util/sync.hh @@ -39,6 +39,7 @@ public: SyncBase() { } SyncBase(const T & data) : data(data) { } SyncBase(T && data) noexcept : data(std::move(data)) { } + SyncBase(SyncBase && other) noexcept : data(std::move(*other.lock())) { } template class Lock diff --git a/src/libutil/include/nix/util/types.hh b/src/libutil/include/nix/util/types.hh index 5139256ca..edb34f5e2 100644 --- a/src/libutil/include/nix/util/types.hh +++ b/src/libutil/include/nix/util/types.hh @@ -12,8 +12,25 @@ namespace nix { typedef std::list Strings; -typedef std::map StringMap; -typedef std::map StringPairs; + +/** + * Alias to ordered std::string -> std::string map container with transparent comparator. + * + * Used instead of std::map to use C++14 N3657 [1] + * heterogenous lookup consistently across the whole codebase. + * Transparent comparators get rid of creation of unnecessary + * temporary variables when looking up keys by `std::string_view` + * or C-style `const char *` strings. + * + * [1]: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3657.htm + */ +using StringMap = std::map>; +/** + * Alias to an ordered map of std::string -> std::string. Uses transparent comparator. + * + * @see StringMap + */ +using StringPairs = StringMap; /** * Alias to ordered set container with transparent comparator. diff --git a/src/libutil/include/nix/util/unix-domain-socket.hh b/src/libutil/include/nix/util/unix-domain-socket.hh index ae98e9923..2dce9f9f2 100644 --- a/src/libutil/include/nix/util/unix-domain-socket.hh +++ b/src/libutil/include/nix/util/unix-domain-socket.hh @@ -9,6 +9,8 @@ #endif #include +#include + namespace nix { /** @@ -78,7 +80,12 @@ void bind(Socket fd, const std::string & path); /** * Connect to a Unix domain socket. */ -void connect(Socket fd, const std::string & path); +void connect(Socket fd, const std::filesystem::path & path); + +/** + * Connect to a Unix domain socket. + */ +AutoCloseFD connect(const std::filesystem::path & path); /** * Connect to a Unix domain socket. 
diff --git a/src/libutil/include/nix/util/url.hh b/src/libutil/include/nix/util/url.hh index ced846787..a509f06da 100644 --- a/src/libutil/include/nix/util/url.hh +++ b/src/libutil/include/nix/util/url.hh @@ -10,7 +10,7 @@ struct ParsedURL std::string scheme; std::optional authority; std::string path; - std::map query; + StringMap query; std::string fragment; std::string to_string() const; @@ -30,9 +30,9 @@ MakeError(BadURL, Error); std::string percentDecode(std::string_view in); std::string percentEncode(std::string_view s, std::string_view keep=""); -std::map decodeQuery(const std::string & query); +StringMap decodeQuery(const std::string & query); -std::string encodeQuery(const std::map & query); +std::string encodeQuery(const StringMap & query); ParsedURL parseURL(const std::string & url); diff --git a/src/libutil/include/nix/util/xml-writer.hh b/src/libutil/include/nix/util/xml-writer.hh index 74f53b7ca..ae5a6ced7 100644 --- a/src/libutil/include/nix/util/xml-writer.hh +++ b/src/libutil/include/nix/util/xml-writer.hh @@ -10,7 +10,7 @@ namespace nix { -typedef std::map XMLAttrs; +typedef std::map> XMLAttrs; class XMLWriter diff --git a/src/libutil/linux/cgroup.cc b/src/libutil/linux/cgroup.cc index 4acfe82f1..c82fdc11c 100644 --- a/src/libutil/linux/cgroup.cc +++ b/src/libutil/linux/cgroup.cc @@ -31,9 +31,9 @@ std::optional getCgroupFS() } // FIXME: obsolete, check for cgroup2 -std::map getCgroups(const Path & cgroupFile) +StringMap getCgroups(const Path & cgroupFile) { - std::map cgroups; + StringMap cgroups; for (auto & line : tokenizeString>(readFile(cgroupFile), "\n")) { static std::regex regex("([0-9]+):([^:]*):(.*)"); diff --git a/src/libutil/linux/include/nix/util/cgroup.hh b/src/libutil/linux/include/nix/util/cgroup.hh index 6a41c6b44..eb49c3419 100644 --- a/src/libutil/linux/include/nix/util/cgroup.hh +++ b/src/libutil/linux/include/nix/util/cgroup.hh @@ -10,7 +10,7 @@ namespace nix { std::optional getCgroupFS(); -std::map getCgroups(const Path & cgroupFile); +StringMap getCgroups(const Path & cgroupFile); struct CgroupStats { diff --git a/src/libutil/linux/include/nix/util/namespaces.hh b/src/libutil/linux/include/nix/util/linux-namespaces.hh similarity index 100% rename from src/libutil/linux/include/nix/util/namespaces.hh rename to src/libutil/linux/include/nix/util/linux-namespaces.hh diff --git a/src/libutil/linux/include/nix/util/meson.build b/src/libutil/linux/include/nix/util/meson.build index 9587aa916..ec7030c49 100644 --- a/src/libutil/linux/include/nix/util/meson.build +++ b/src/libutil/linux/include/nix/util/meson.build @@ -4,5 +4,6 @@ include_dirs += include_directories('../..') headers += files( 'cgroup.hh', - 'namespaces.hh', + 'linux-namespaces.hh', + # hack for trailing newline ) diff --git a/src/libutil/linux/namespaces.cc b/src/libutil/linux/linux-namespaces.cc similarity index 99% rename from src/libutil/linux/namespaces.cc rename to src/libutil/linux/linux-namespaces.cc index 405866c0b..93f299076 100644 --- a/src/libutil/linux/namespaces.cc +++ b/src/libutil/linux/linux-namespaces.cc @@ -1,3 +1,4 @@ +#include "nix/util/linux-namespaces.hh" #include "nix/util/current-process.hh" #include "nix/util/util.hh" #include "nix/util/finally.hh" diff --git a/src/libutil/linux/meson.build b/src/libutil/linux/meson.build index bfda8b1a6..230dd46f3 100644 --- a/src/libutil/linux/meson.build +++ b/src/libutil/linux/meson.build @@ -1,6 +1,7 @@ sources += files( 'cgroup.cc', - 'namespaces.cc', + 'linux-namespaces.cc', + # hack for trailing newline ) 
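The `url.hh` hunk above only changes the query helpers' signatures to `StringMap`; the sketch below shows how they might be used. It assumes a program built against `libnixutil`, and the round-trip behaviour is inferred from the tokenise-on-`&` implementation of `decodeQuery` visible in `url.cc` later in this diff, so treat it as illustrative rather than authoritative.

```cpp
#include "nix/util/url.hh"
#include <cassert>

int main()
{
    using namespace nix;
    StringMap q = decodeQuery("ref=main&shallow=1");
    assert(q["ref"] == "main");
    assert(q["shallow"] == "1");
    // StringMap is ordered, so keys come back in sorted order when re-encoded.
    assert(encodeQuery(q) == "ref=main&shallow=1");
}
```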
subdir('include/nix/util') diff --git a/src/libutil/logging.cc b/src/libutil/logging.cc index 7aad5de2c..5a14b63be 100644 --- a/src/libutil/logging.cc +++ b/src/libutil/logging.cc @@ -166,7 +166,7 @@ Activity::Activity(Logger & logger, Verbosity lvl, ActivityType type, logger.startActivity(id, lvl, type, s, fields, parent); } -void to_json(nlohmann::json & json, std::shared_ptr pos) +void to_json(nlohmann::json & json, std::shared_ptr pos) { if (pos) { json["line"] = pos->line; @@ -334,7 +334,7 @@ std::unique_ptr makeJSONLogger(const std::filesystem::path & path, bool AutoCloseFD fd = std::filesystem::is_socket(path) ? connect(path) - : toDescriptor(open(path.c_str(), O_CREAT | O_APPEND | O_WRONLY, 0644)); + : toDescriptor(open(path.string().c_str(), O_CREAT | O_APPEND | O_WRONLY, 0644)); if (!fd) throw SysError("opening log file %1%", path); diff --git a/src/libutil/meson.build b/src/libutil/meson.build index 04ca06eee..f5ad2b1f6 100644 --- a/src/libutil/meson.build +++ b/src/libutil/meson.build @@ -169,6 +169,10 @@ if host_machine.system() == 'linux' subdir('linux') endif +if host_machine.system() == 'freebsd' + subdir('freebsd') +endif + if host_machine.system() == 'windows' subdir('windows') else diff --git a/src/libutil/package.nix b/src/libutil/package.nix index 5bbbbfd96..ba580b1b3 100644 --- a/src/libutil/package.nix +++ b/src/libutil/package.nix @@ -37,6 +37,8 @@ mkMesonLibrary (finalAttrs: { ./include/nix/util/meson.build ./linux/meson.build ./linux/include/nix/util/meson.build + ./freebsd/meson.build + ./freebsd/include/nix/util/meson.build ./unix/meson.build ./unix/include/nix/util/meson.build ./windows/meson.build diff --git a/src/libutil/pos-table.cc b/src/libutil/pos-table.cc index 5a61ffbc5..e50b12873 100644 --- a/src/libutil/pos-table.cc +++ b/src/libutil/pos-table.cc @@ -15,21 +15,35 @@ Pos PosTable::operator[](PosIdx p) const const auto offset = origin->offsetOf(p); Pos result{0, 0, origin->origin}; - auto lines = this->lines.lock(); - auto linesForInput = (*lines)[origin->offset]; + auto linesCache = this->linesCache.lock(); - if (linesForInput.empty()) { - auto source = result.getSource().value_or(""); - const char * begin = source.data(); - for (Pos::LinesIterator it(source), end; it != end; it++) - linesForInput.push_back(it->data() - begin); - if (linesForInput.empty()) - linesForInput.push_back(0); + /* Try the origin's line cache */ + const auto * linesForInput = linesCache->getOrNullptr(origin->offset); + + auto fillCacheForOrigin = [](std::string_view content) { + auto contentLines = Lines(); + + const char * begin = content.data(); + for (Pos::LinesIterator it(content), end; it != end; it++) + contentLines.push_back(it->data() - begin); + if (contentLines.empty()) + contentLines.push_back(0); + + return contentLines; + }; + + /* Calculate line offsets and fill the cache */ + if (!linesForInput) { + auto originContent = result.getSource().value_or(""); + linesCache->upsert(origin->offset, fillCacheForOrigin(originContent)); + linesForInput = linesCache->getOrNullptr(origin->offset); } - // as above: the first line starts at byte 0 and is always present - auto lineStartOffset = std::prev(std::upper_bound(linesForInput.begin(), linesForInput.end(), offset)); - result.line = 1 + (lineStartOffset - linesForInput.begin()); + assert(linesForInput); + + // as above: the first line starts at byte 0 and is always present + auto lineStartOffset = std::prev(std::upper_bound(linesForInput->begin(), linesForInput->end(), offset)); + result.line = 1 + (lineStartOffset - 
linesForInput->begin()); result.column = 1 + (offset - *lineStartOffset); return result; } diff --git a/src/libutil/position.cc b/src/libutil/position.cc index dfe0e2abb..a1d9460ed 100644 --- a/src/libutil/position.cc +++ b/src/libutil/position.cc @@ -2,19 +2,9 @@ namespace nix { -Pos::Pos(const Pos * other) +Pos::operator std::shared_ptr() const { - if (!other) { - return; - } - line = other->line; - column = other->column; - origin = other->origin; -} - -Pos::operator std::shared_ptr() const -{ - return std::make_shared(&*this); + return std::make_shared(*this); } std::optional Pos::getCodeLines() const diff --git a/src/libutil/posix-source-accessor.cc b/src/libutil/posix-source-accessor.cc index 773540e6a..2ce7c88e4 100644 --- a/src/libutil/posix-source-accessor.cc +++ b/src/libutil/posix-source-accessor.cc @@ -141,33 +141,44 @@ SourceAccessor::DirEntries PosixSourceAccessor::readDirectory(const CanonPath & for (auto & entry : DirectoryIterator{makeAbsPath(path)}) { checkInterrupt(); auto type = [&]() -> std::optional { - std::filesystem::file_type nativeType; try { - nativeType = entry.symlink_status().type(); + /* WARNING: We are specifically not calling symlink_status() + * here, because that always translates to `stat` call and + * doesn't make use of any caching. Instead, we have to + * rely on the myriad of `is_*` functions, which actually do + * the caching. If you are in doubt then take a look at the + * libstdc++ implementation [1] and the standard proposal + * about the caching variations of directory_entry [2]. + + * [1]: https://github.com/gcc-mirror/gcc/blob/8ea555b7b4725dbc5d9286f729166cd54ce5b615/libstdc%2B%2B-v3/include/bits/fs_dir.h#L341-L348 + * [2]: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2016/p0317r1.html + */ + + /* Check for symlink first, because other getters follow symlinks. */ + if (entry.is_symlink()) + return tSymlink; + if (entry.is_regular_file()) + return tRegular; + if (entry.is_directory()) + return tDirectory; + if (entry.is_character_file()) + return tChar; + if (entry.is_block_file()) + return tBlock; + if (entry.is_fifo()) + return tFifo; + if (entry.is_socket()) + return tSocket; + return tUnknown; } catch (std::filesystem::filesystem_error & e) { // We cannot always stat the child. (Ideally there is no // stat because the native directory entry has the type // already, but this isn't always the case.) if (e.code() == std::errc::permission_denied || e.code() == std::errc::operation_not_permitted) return std::nullopt; - else throw; + else + throw; } - - // cannot exhaustively enumerate because implementation-specific - // additional file types are allowed. 
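The `pos-table.cc` hunk above computes line/column positions by binary-searching a sorted vector of line-start byte offsets. The following standalone sketch (standard C++ only, with made-up offsets) shows the same `std::upper_bound`/`std::prev` idiom in isolation.

```cpp
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <vector>

int main()
{
    // Byte offsets at which each line starts; the first line always starts at offset 0.
    std::vector<uint32_t> lineStarts = {0, 14, 29, 47};
    uint32_t offset = 33; // some byte offset into the buffer

    // Find the first line start greater than `offset`, then step back to the line containing it.
    auto lineStart = std::prev(std::upper_bound(lineStarts.begin(), lineStarts.end(), offset));
    auto line = 1 + (lineStart - lineStarts.begin());
    auto column = 1 + (offset - *lineStart);
    std::cout << "line " << line << ", column " << column << '\n'; // line 3, column 5
}
```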
-#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wswitch-enum" - switch (nativeType) { - case std::filesystem::file_type::regular: return Type::tRegular; break; - case std::filesystem::file_type::symlink: return Type::tSymlink; break; - case std::filesystem::file_type::directory: return Type::tDirectory; break; - case std::filesystem::file_type::character: return Type::tChar; break; - case std::filesystem::file_type::block: return Type::tBlock; break; - case std::filesystem::file_type::fifo: return Type::tFifo; break; - case std::filesystem::file_type::socket: return Type::tSocket; break; - default: return tUnknown; - } -#pragma GCC diagnostic pop }(); res.emplace(entry.path().filename().string(), type); } diff --git a/src/libutil/source-accessor.cc b/src/libutil/source-accessor.cc index b9ebc82b6..fc9752456 100644 --- a/src/libutil/source-accessor.cc +++ b/src/libutil/source-accessor.cc @@ -1,5 +1,5 @@ +#include #include "nix/util/source-accessor.hh" -#include "nix/util/archive.hh" namespace nix { diff --git a/src/libutil/unix-domain-socket.cc b/src/libutil/unix-domain-socket.cc index 0e8c21d66..2422caf14 100644 --- a/src/libutil/unix-domain-socket.cc +++ b/src/libutil/unix-domain-socket.cc @@ -29,7 +29,6 @@ AutoCloseFD createUnixDomainSocket() return fdSocket; } - AutoCloseFD createUnixDomainSocket(const Path & path, mode_t mode) { auto fdSocket = nix::createUnixDomainSocket(); @@ -100,7 +99,6 @@ static void bindConnectProcHelper( } } - void bind(Socket fd, const std::string & path) { unlink(path.c_str()); @@ -108,10 +106,9 @@ void bind(Socket fd, const std::string & path) bindConnectProcHelper("bind", ::bind, fd, path); } - -void connect(Socket fd, const std::string & path) +void connect(Socket fd, const std::filesystem::path & path) { - bindConnectProcHelper("connect", ::connect, fd, path); + bindConnectProcHelper("connect", ::connect, fd, path.string()); } AutoCloseFD connect(const std::filesystem::path & path) diff --git a/src/libutil/unix/file-descriptor.cc b/src/libutil/unix/file-descriptor.cc index e6d0c255d..0051e8aa4 100644 --- a/src/libutil/unix/file-descriptor.cc +++ b/src/libutil/unix/file-descriptor.cc @@ -15,7 +15,7 @@ namespace nix { namespace { // This function is needed to handle non-blocking reads/writes. This is needed in the buildhook, because -// somehow the json logger file descriptor ends up beeing non-blocking and breaks remote-building. +// somehow the json logger file descriptor ends up being non-blocking and breaks remote-building. 
// TODO: get rid of buildhook and remove this function again (https://github.com/NixOS/nix/issues/12688) void pollFD(int fd, int events) { diff --git a/src/libutil/url.cc b/src/libutil/url.cc index eaa2b0682..b7286072d 100644 --- a/src/libutil/url.cc +++ b/src/libutil/url.cc @@ -70,9 +70,9 @@ std::string percentDecode(std::string_view in) return decoded; } -std::map decodeQuery(const std::string & query) +StringMap decodeQuery(const std::string & query) { - std::map result; + StringMap result; for (const auto & s : tokenizeString(query, "&")) { auto e = s.find('='); @@ -108,7 +108,7 @@ std::string percentEncode(std::string_view s, std::string_view keep) return res; } -std::string encodeQuery(const std::map & ss) +std::string encodeQuery(const StringMap & ss) { std::string res; bool first = true; diff --git a/src/libutil/windows/file-descriptor.cc b/src/libutil/windows/file-descriptor.cc index f451bc0d3..03d68232c 100644 --- a/src/libutil/windows/file-descriptor.cc +++ b/src/libutil/windows/file-descriptor.cc @@ -47,7 +47,7 @@ void writeFull(HANDLE handle, std::string_view s, bool allowInterrupts) if (allowInterrupts) checkInterrupt(); DWORD res; #if _WIN32_WINNT >= 0x0600 - auto path = handleToPath(handle); // debug; do it before becuase handleToPath changes lasterror + auto path = handleToPath(handle); // debug; do it before because handleToPath changes lasterror if (!WriteFile(handle, s.data(), s.size(), &res, NULL)) { throw WinError("writing to file %1%:%2%", handle, path); } diff --git a/src/libutil/windows/file-system.cc b/src/libutil/windows/file-system.cc index a73fa223a..f31c913f1 100644 --- a/src/libutil/windows/file-system.cc +++ b/src/libutil/windows/file-system.cc @@ -1,4 +1,5 @@ #include "nix/util/file-system.hh" +#include "nix/util/logging.hh" #ifdef _WIN32 namespace nix { diff --git a/src/libutil/windows/include/nix/util/meson.build b/src/libutil/windows/include/nix/util/meson.build index 1bd56c4bd..5d0ace929 100644 --- a/src/libutil/windows/include/nix/util/meson.build +++ b/src/libutil/windows/include/nix/util/meson.build @@ -6,4 +6,5 @@ headers += files( 'signals-impl.hh', 'windows-async-pipe.hh', 'windows-error.hh', + # hack for trailing newline ) diff --git a/src/nix-build/nix-build.cc b/src/nix-build/nix-build.cc index 3313c02aa..7e0b40252 100644 --- a/src/nix-build/nix-build.cc +++ b/src/nix-build/nix-build.cc @@ -387,8 +387,8 @@ static void main_nix_build(int argc, char * * argv) return false; } bool add = false; - if (v.type() == nFunction && v.payload.lambda.fun->hasFormals()) { - for (auto & i : v.payload.lambda.fun->formals->formals) { + if (v.type() == nFunction && v.lambda().fun->hasFormals()) { + for (auto & i : v.lambda().fun->formals->formals) { if (state->symbols[i.name] == "inNixShell") { add = true; break; @@ -420,15 +420,8 @@ static void main_nix_build(int argc, char * * argv) state->maybePrintStats(); auto buildPaths = [&](const std::vector & paths) { - /* Note: we do this even when !printMissing to efficiently - fetch binary cache data. 
*/ - uint64_t downloadSize, narSize; - StorePathSet willBuild, willSubstitute, unknown; - store->queryMissing(paths, - willBuild, willSubstitute, unknown, downloadSize, narSize); - if (settings.printMissing) - printMissing(ref(store), willBuild, willSubstitute, unknown, downloadSize, narSize); + printMissing(ref(store), paths); if (!dryRun) store->buildPaths(paths, buildMode, evalStore); diff --git a/src/nix-channel/nix-channel.cc b/src/nix-channel/nix-channel.cc index 6699a2ac9..e6d2a89ad 100644 --- a/src/nix-channel/nix-channel.cc +++ b/src/nix-channel/nix-channel.cc @@ -4,9 +4,11 @@ #include "nix/store/filetransfer.hh" #include "nix/store/store-open.hh" #include "nix/cmd/legacy.hh" +#include "nix/cmd/common-eval-args.hh" #include "nix/expr/eval-settings.hh" // for defexpr #include "nix/util/users.hh" #include "nix/fetchers/tarball.hh" +#include "nix/fetchers/fetch-settings.hh" #include "self-exe.hh" #include "man-pages.hh" @@ -16,7 +18,7 @@ using namespace nix; -typedef std::map Channels; +typedef StringMap Channels; static Channels channels; static std::filesystem::path channelsList; @@ -114,7 +116,7 @@ static void update(const StringSet & channelNames) // We want to download the url to a file to see if it's a tarball while also checking if we // got redirected in the process, so that we can grab the various parts of a nix channel // definition from a consistent location if the redirect changes mid-download. - auto result = fetchers::downloadFile(store, url, std::string(baseNameOf(url))); + auto result = fetchers::downloadFile(store, fetchSettings, url, std::string(baseNameOf(url))); auto filename = store->toRealPath(result.storePath); url = result.effectiveUrl; @@ -128,9 +130,9 @@ static void update(const StringSet & channelNames) if (!unpacked) { // Download the channel tarball. 
try { - filename = store->toRealPath(fetchers::downloadFile(store, url + "/nixexprs.tar.xz", "nixexprs.tar.xz").storePath); + filename = store->toRealPath(fetchers::downloadFile(store, fetchSettings, url + "/nixexprs.tar.xz", "nixexprs.tar.xz").storePath); } catch (FileTransferError & e) { - filename = store->toRealPath(fetchers::downloadFile(store, url + "/nixexprs.tar.bz2", "nixexprs.tar.bz2").storePath); + filename = store->toRealPath(fetchers::downloadFile(store, fetchSettings, url + "/nixexprs.tar.bz2", "nixexprs.tar.bz2").storePath); } } // Regardless of where it came from, add the expression representing this channel to accumulated expression diff --git a/src/nix-env/nix-env.cc b/src/nix-env/nix-env.cc index 25ff39e38..fd48e67dc 100644 --- a/src/nix-env/nix-env.cc +++ b/src/nix-env/nix-env.cc @@ -1265,7 +1265,7 @@ static void opQuery(Globals & globals, Strings opFlags, Strings opArgs) } else if (v->type() == nList) { attrs2["type"] = "strings"; XMLOpenElement m(xml, "meta", attrs2); - for (auto elem : v->listItems()) { + for (auto elem : v->listView()) { if (elem->type() != nString) continue; XMLAttrs attrs3; attrs3["value"] = elem->c_str(); diff --git a/src/nix-store/nix-store.cc b/src/nix-store/nix-store.cc index 9acdf4554..3da7a8ac1 100644 --- a/src/nix-store/nix-store.cc +++ b/src/nix-store/nix-store.cc @@ -146,23 +146,19 @@ static void opRealise(Strings opFlags, Strings opArgs) for (auto & i : opArgs) paths.push_back(followLinksToStorePathWithOutputs(*store, i)); - uint64_t downloadSize, narSize; - StorePathSet willBuild, willSubstitute, unknown; - store->queryMissing( - toDerivedPaths(paths), - willBuild, willSubstitute, unknown, downloadSize, narSize); + auto missing = store->queryMissing(toDerivedPaths(paths)); /* Filter out unknown paths from `paths`. */ if (ignoreUnknown) { std::vector paths2; for (auto & i : paths) - if (!unknown.count(i.path)) paths2.push_back(i); + if (!missing.unknown.count(i.path)) paths2.push_back(i); paths = std::move(paths2); - unknown = StorePathSet(); + missing.unknown = StorePathSet(); } if (settings.printMissing) - printMissing(ref(store), willBuild, willSubstitute, unknown, downloadSize, narSize); + printMissing(ref(store), missing); if (dryRun) return; @@ -862,7 +858,7 @@ static void opServe(Strings opFlags, Strings opArgs) auto options = ServeProto::Serialise::read(*store, rconn); - // Only certain feilds get initialized based on the protocol + // Only certain fields get initialized based on the protocol // version. This is why not all the code below is unconditional. 
// See how the serialization logic in // `ServeProto::Serialise` matches diff --git a/src/nix/develop.cc b/src/nix/develop.cc index ec23d3212..b0818e50b 100644 --- a/src/nix/develop.cc +++ b/src/nix/develop.cc @@ -56,12 +56,12 @@ struct BuildEnvironment using Array = std::vector; - using Associative = std::map; + using Associative = StringMap; using Value = std::variant; std::map vars; - std::map bashFunctions; + StringMap bashFunctions; std::optional> structuredAttrs; static BuildEnvironment fromJSON(const nlohmann::json & json) diff --git a/src/nix/flake-prefetch-inputs.cc b/src/nix/flake-prefetch-inputs.cc index 1d4209d4d..9ee4b546e 100644 --- a/src/nix/flake-prefetch-inputs.cc +++ b/src/nix/flake-prefetch-inputs.cc @@ -48,7 +48,8 @@ struct CmdFlakePrefetchInputs : FlakeCommand Activity act(*logger, lvlInfo, actUnknown, fmt("fetching '%s'", lockedNode->lockedRef)); auto accessor = lockedNode->lockedRef.input.getAccessor(store).first; if (!evalSettings.lazyTrees) - fetchToStore(*store, accessor, FetchMode::Copy, lockedNode->lockedRef.input.getName()); + fetchToStore( + fetchSettings, *store, accessor, FetchMode::Copy, lockedNode->lockedRef.input.getName()); } catch (Error & e) { printError("%s", e.what()); nrFailed++; diff --git a/src/nix/flake.cc b/src/nix/flake.cc index 35e96e493..b2d03c28a 100644 --- a/src/nix/flake.cc +++ b/src/nix/flake.cc @@ -57,7 +57,7 @@ LockedFlake FlakeCommand::lockFlake() std::vector FlakeCommand::getFlakeRefsForCompletion() { return { - // Like getFlakeRef but with expandTilde calld first + // Like getFlakeRef but with expandTilde called first parseFlakeRef(fetchSettings, expandTilde(flakeUrl), std::filesystem::current_path().string()) }; } @@ -493,8 +493,8 @@ struct CmdFlakeCheck : FlakeCommand if (!v.isLambda()) { throw Error("overlay is not a function, but %s instead", showType(v)); } - if (v.payload.lambda.fun->hasFormals() - || !argHasName(v.payload.lambda.fun->arg, "final")) + if (v.lambda().fun->hasFormals() + || !argHasName(v.lambda().fun->arg, "final")) throw Error("overlay does not take an argument named 'final'"); // FIXME: if we have a 'nixpkgs' input, use it to // evaluate the overlay. @@ -1052,6 +1052,10 @@ struct CmdFlakeArchive : FlakeCommand, MixJSON, MixDryRun { std::string dstUri; + CheckSigsFlag checkSigs = CheckSigs; + + SubstituteFlag substitute = NoSubstitute; + CmdFlakeArchive() { addFlag({ @@ -1060,6 +1064,11 @@ struct CmdFlakeArchive : FlakeCommand, MixJSON, MixDryRun .labels = {"store-uri"}, .handler = {&dstUri}, }); + addFlag({ + .longName = "no-check-sigs", + .description = "Do not require that paths are signed by trusted keys.", + .handler = {&checkSigs, NoCheckSigs}, + }); } std::string description() override @@ -1126,7 +1135,8 @@ struct CmdFlakeArchive : FlakeCommand, MixJSON, MixDryRun if (!dryRun && !dstUri.empty()) { ref dstStore = dstUri.empty() ? 
openStore() : openStore(dstUri); - copyPaths(*store, *dstStore, sources); + + copyPaths(*store, *dstStore, sources, NoRepair, checkSigs, substitute); } } }; @@ -1492,7 +1502,7 @@ struct CmdFlakePrefetch : FlakeCommand, MixJSON auto originalRef = getFlakeRef(); auto resolvedRef = originalRef.resolve(store); auto [accessor, lockedRef] = resolvedRef.lazyFetch(store); - auto storePath = fetchToStore(*store, accessor, FetchMode::Copy, lockedRef.input.getName()); + auto storePath = fetchToStore(getEvalState()->fetchSettings, *store, accessor, FetchMode::Copy, lockedRef.input.getName()); auto hash = store->queryPathInfo(storePath)->narHash; if (json) { diff --git a/src/nix/formatter-run.md b/src/nix/formatter-run.md index db8583c95..201cae92e 100644 --- a/src/nix/formatter-run.md +++ b/src/nix/formatter-run.md @@ -8,6 +8,10 @@ Flags can be forwarded to the formatter by using `--` followed by the flags. Any arguments will be forwarded to the formatter. Typically these are the files to format. +The environment variable `PRJ_ROOT` (according to [prj-spec](https://github.com/numtide/prj-spec)) +will be set to the absolute path to the directory containing the closest parent `flake.nix` +relative to the current directory. + # Example diff --git a/src/nix/formatter.cc b/src/nix/formatter.cc index 8b171b244..212bb8d70 100644 --- a/src/nix/formatter.cc +++ b/src/nix/formatter.cc @@ -1,8 +1,10 @@ #include "nix/cmd/command.hh" +#include "nix/cmd/installable-flake.hh" #include "nix/cmd/installable-value.hh" #include "nix/expr/eval.hh" #include "nix/store/local-fs-store.hh" #include "nix/cmd/installable-derived-path.hh" +#include "nix/util/environment-variables.hh" #include "run.hh" using namespace nix; @@ -72,10 +74,14 @@ struct CmdFormatterRun : MixFormatter, MixJSON auto evalState = getEvalState(); auto evalStore = getEvalStore(); - auto installable_ = parseInstallable(store, "."); + auto installable_ = parseInstallable(store, ".").cast(); auto & installable = InstallableValue::require(*installable_); auto app = installable.toApp(*evalState).resolve(evalStore, store); + auto maybeFlakeDir = installable_->flakeRef.input.getSourcePath(); + assert(maybeFlakeDir.has_value()); + auto flakeDir = maybeFlakeDir.value(); + Strings programArgs{app.program}; // Propagate arguments from the CLI @@ -83,11 +89,22 @@ struct CmdFormatterRun : MixFormatter, MixJSON programArgs.push_back(i); } + // Add the path to the flake as an environment variable. This enables formatters to format the entire flake even + // if run from a subdirectory. + StringMap env = getEnv(); + env["PRJ_ROOT"] = flakeDir.string(); + // Release our references to eval caches to ensure they are persisted to disk, because // we are about to exec out of this process without running C++ destructors. 
evalState->evalCaches.clear(); - execProgramInStore(store, UseLookupPath::DontUse, app.program, programArgs); + execProgramInStore( + store, + UseLookupPath::DontUse, + app.program, + programArgs, + std::nullopt, // Use default system + env); }; }; diff --git a/src/nix/main.cc b/src/nix/main.cc index b000b9916..1c1ba95c7 100644 --- a/src/nix/main.cc +++ b/src/nix/main.cc @@ -38,7 +38,7 @@ #endif #ifdef __linux__ -# include "nix/util/namespaces.hh" +# include "nix/util/linux-namespaces.hh" #endif #ifndef _WIN32 diff --git a/src/nix/meson.build b/src/nix/meson.build index 0273b6f51..91c8261e4 100644 --- a/src/nix/meson.build +++ b/src/nix/meson.build @@ -227,7 +227,7 @@ foreach linkname : nix_symlinks # TODO(Ericson2314): Don't do this once we have the `meson.override_find_program` working) build_by_default: true ) - # TODO(Ericson3214): Dosen't yet work + # TODO(Ericson3214): Doesn't yet work #meson.override_find_program(linkname, t) endforeach @@ -247,7 +247,7 @@ custom_target( # TODO(Ericson2314): Don't do this once we have the `meson.override_find_program` working) build_by_default: true ) -# TODO(Ericson3214): Dosen't yet work +# TODO(Ericson3214): Doesn't yet work #meson.override_find_program(linkname, t) localstatedir = nix_store.get_variable( diff --git a/src/nix/prefetch.cc b/src/nix/prefetch.cc index 9e5e3c09a..96dcdb4e8 100644 --- a/src/nix/prefetch.cc +++ b/src/nix/prefetch.cc @@ -46,7 +46,7 @@ std::string resolveMirrorUrl(EvalState & state, const std::string & url) if (mirrorList->value->listSize() < 1) throw Error("mirror URL '%s' did not expand to anything", url); - std::string mirror(state.forceString(*mirrorList->value->listElems()[0], noPos, "while evaluating the first available mirror")); + std::string mirror(state.forceString(*mirrorList->value->listView()[0], noPos, "while evaluating the first available mirror")); return mirror + (hasSuffix(mirror, "/") ? "" : "/") + s.substr(p + 1); } @@ -221,7 +221,7 @@ static int main_nix_prefetch_url(int argc, char * * argv) state->forceList(*attr->value, noPos, "while evaluating the urls to prefetch"); if (attr->value->listSize() < 1) throw Error("'urls' list is empty"); - url = state->forceString(*attr->value->listElems()[0], noPos, "while evaluating the first url from the urls list"); + url = state->forceString(*attr->value->listView()[0], noPos, "while evaluating the first url from the urls list"); /* Extract the hash mode. */ auto attr2 = v.attrs()->get(state->symbols.create("outputHashMode")); diff --git a/src/nix/profile-add.md b/src/nix/profile-add.md index 0bb65d8e6..f1d5391ef 100644 --- a/src/nix/profile-add.md +++ b/src/nix/profile-add.md @@ -32,6 +32,6 @@ This command adds [_installables_](./nix.md#installables) to a Nix profile. > **Note** > -> `nix profile install` is an alias for `nix profile add` in Determinate Nix. +> `nix profile install` is an alias for `nix profile add`. )"" diff --git a/src/nix/run.cc b/src/nix/run.cc index 0473c99b7..3dae8ebc9 100644 --- a/src/nix/run.cc +++ b/src/nix/run.cc @@ -10,6 +10,7 @@ #include "nix/util/finally.hh" #include "nix/util/source-accessor.hh" #include "nix/expr/eval.hh" +#include "nix/util/util.hh" #include #ifdef __linux__ @@ -19,6 +20,8 @@ #include +extern char ** environ __attribute__((weak)); + namespace nix::fs { using namespace std::filesystem; } using namespace nix; @@ -27,14 +30,37 @@ std::string chrootHelperName = "__run_in_chroot"; namespace nix { +/* Convert `env` to a list of strings suitable for `execve`'s `envp` argument. 
*/ +Strings toEnvp(StringMap env) +{ + Strings envStrs; + for (auto & i : env) { + envStrs.push_back(i.first + "=" + i.second); + } + + return envStrs; +} + void execProgramInStore(ref store, UseLookupPath useLookupPath, const std::string & program, const Strings & args, - std::optional system) + std::optional system, + std::optional env) { logger->stop(); + char **envp; + Strings envStrs; + std::vector envCharPtrs; + if (env.has_value()) { + envStrs = toEnvp(env.value()); + envCharPtrs = stringsToCharPtrs(envStrs); + envp = envCharPtrs.data(); + } else { + envp = environ; + } + restoreProcessContext(); /* If this is a diverted store (i.e. its "logical" location @@ -54,7 +80,7 @@ void execProgramInStore(ref store, Strings helperArgs = { chrootHelperName, store->storeDir, store2->getRealStoreDir(), std::string(system.value_or("")), program }; for (auto & arg : args) helperArgs.push_back(arg); - execv(getSelfExe().value_or("nix").c_str(), stringsToCharPtrs(helperArgs).data()); + execve(getSelfExe().value_or("nix").c_str(), stringsToCharPtrs(helperArgs).data(), envp); throw SysError("could not execute chroot helper"); } @@ -64,10 +90,12 @@ void execProgramInStore(ref store, linux::setPersonality(*system); #endif - if (useLookupPath == UseLookupPath::Use) + if (useLookupPath == UseLookupPath::Use) { + // We have to set `environ` by hand because there is no `execvpe` on macOS. + environ = envp; execvp(program.c_str(), stringsToCharPtrs(args).data()); - else - execv(program.c_str(), stringsToCharPtrs(args).data()); + } else + execve(program.c_str(), stringsToCharPtrs(args).data(), envp); throw SysError("unable to execute '%s'", program); } diff --git a/src/nix/run.hh b/src/nix/run.hh index 9d95b8e7c..5367c515c 100644 --- a/src/nix/run.hh +++ b/src/nix/run.hh @@ -14,6 +14,7 @@ void execProgramInStore(ref store, UseLookupPath useLookupPath, const std::string & program, const Strings & args, - std::optional system = std::nullopt); + std::optional system = std::nullopt, + std::optional env = std::nullopt); } diff --git a/src/nix/unix/daemon.cc b/src/nix/unix/daemon.cc index 115a0a1e9..a14632c2f 100644 --- a/src/nix/unix/daemon.cc +++ b/src/nix/unix/daemon.cc @@ -481,7 +481,7 @@ static void processStdioConnection(ref store, TrustedFlag trustClient) * @param forceTrustClientOpt See `daemonLoop()` and the parameter with * the same name over there for details. * - * @param procesOps Whether to force processing ops even if the next + * @param processOps Whether to force processing ops even if the next * store also is a remote store and could process it directly. 
*/ static void runDaemon(bool stdio, std::optional forceTrustClientOpt, bool processOps) diff --git a/src/perl/t/meson.build b/src/perl/t/meson.build index dbd1139f3..5e75920ac 100644 --- a/src/perl/t/meson.build +++ b/src/perl/t/meson.build @@ -7,6 +7,7 @@ nix_perl_tests = files( 'init.t', + # hack for trailing newline ) diff --git a/tests/functional/binary-cache.sh b/tests/functional/binary-cache.sh index ff39ab3b7..2c102df07 100755 --- a/tests/functional/binary-cache.sh +++ b/tests/functional/binary-cache.sh @@ -151,8 +151,11 @@ nix-build --substituters "file://$cacheDir" --no-require-sigs dependencies.nix - grepQuiet "don't know how to build" "$TEST_ROOT/log" grepQuiet "building.*input-1" "$TEST_ROOT/log" grepQuiet "building.*input-2" "$TEST_ROOT/log" -grepQuiet "copying path.*input-0" "$TEST_ROOT/log" -grepQuiet "copying path.*top" "$TEST_ROOT/log" + +# Removed for now since 299141ecbd08bae17013226dbeae71e842b4fdd7 / issue #77 is reverted + +#grepQuiet "copying path.*input-0" "$TEST_ROOT/log" +#grepQuiet "copying path.*top" "$TEST_ROOT/log" # Create a signed binary cache. diff --git a/tests/functional/build-remote-trustless-should-fail-0.sh b/tests/functional/build-remote-trustless-should-fail-0.sh index 3401de1b0..e79527d72 100755 --- a/tests/functional/build-remote-trustless-should-fail-0.sh +++ b/tests/functional/build-remote-trustless-should-fail-0.sh @@ -12,7 +12,6 @@ requiresUnprivilegedUserNamespaces [[ $busybox =~ busybox ]] || skipTest "no busybox" unset NIX_STORE_DIR -unset NIX_STATE_DIR # We first build a dependency of the derivation we eventually want to # build. diff --git a/tests/functional/build-remote-trustless.sh b/tests/functional/build-remote-trustless.sh index 9f91a91a9..6014b57bb 100644 --- a/tests/functional/build-remote-trustless.sh +++ b/tests/functional/build-remote-trustless.sh @@ -9,7 +9,6 @@ requiresUnprivilegedUserNamespaces [[ "$busybox" =~ busybox ]] || skipTest "no busybox" unset NIX_STORE_DIR -unset NIX_STATE_DIR remoteDir=$TEST_ROOT/remote diff --git a/tests/functional/build-remote.sh b/tests/functional/build-remote.sh index 765cd71b4..f396bc72e 100644 --- a/tests/functional/build-remote.sh +++ b/tests/functional/build-remote.sh @@ -8,7 +8,6 @@ requiresUnprivilegedUserNamespaces # Avoid store dir being inside sandbox build-dir unset NIX_STORE_DIR -unset NIX_STATE_DIR function join_by { local d=$1; shift; echo -n "$1"; shift; printf "%s" "${@/#/$d}"; } diff --git a/tests/functional/check.sh b/tests/functional/check.sh index b21349288..a1c6decf5 100755 --- a/tests/functional/check.sh +++ b/tests/functional/check.sh @@ -22,6 +22,11 @@ clearStore nix-build dependencies.nix --no-out-link nix-build dependencies.nix --no-out-link --check +# Make sure checking just one output works (#13293) +nix-build multiple-outputs.nix -A a --no-out-link +nix-store --delete "$(nix-build multiple-outputs.nix -A a.second --no-out-link)" +nix-build multiple-outputs.nix -A a.first --no-out-link --check + # Build failure exit codes (100, 104, etc.) 
are from # doc/manual/source/command-ref/status-build-failure.md diff --git a/tests/functional/dyn-drv/build-built-drv.sh b/tests/functional/dyn-drv/build-built-drv.sh index 647be9457..49d61c6ce 100644 --- a/tests/functional/dyn-drv/build-built-drv.sh +++ b/tests/functional/dyn-drv/build-built-drv.sh @@ -18,4 +18,9 @@ clearStore drvDep=$(nix-instantiate ./text-hashed-output.nix -A producingDrv) -expectStderr 1 nix build "${drvDep}^out^out" --no-link | grepQuiet "Building dynamic derivations in one shot is not yet implemented" +# Store layer needs bugfix +requireDaemonNewerThan "2.30pre20250515" + +out2=$(nix build "${drvDep}^out^out" --no-link) + +test $out1 == $out2 diff --git a/tests/functional/dyn-drv/dep-built-drv-2.sh b/tests/functional/dyn-drv/dep-built-drv-2.sh index 3247720af..0e4cc7c12 100644 --- a/tests/functional/dyn-drv/dep-built-drv-2.sh +++ b/tests/functional/dyn-drv/dep-built-drv-2.sh @@ -3,7 +3,7 @@ source common.sh # Store layer needs bugfix -requireDaemonNewerThan "2.27pre20250205" +requireDaemonNewerThan "2.30pre20250515" TODO_NixOS # can't enable a sandbox feature easily @@ -13,4 +13,4 @@ restartDaemon NIX_BIN_DIR="$(dirname "$(type -p nix)")" export NIX_BIN_DIR -expectStderr 1 nix build -L --file ./non-trivial.nix --no-link | grepQuiet "Building dynamic derivations in one shot is not yet implemented" +nix build -L --file ./non-trivial.nix --no-link diff --git a/tests/functional/dyn-drv/dep-built-drv.sh b/tests/functional/dyn-drv/dep-built-drv.sh index 4f6e9b080..e9a8b6b83 100644 --- a/tests/functional/dyn-drv/dep-built-drv.sh +++ b/tests/functional/dyn-drv/dep-built-drv.sh @@ -4,8 +4,11 @@ source common.sh out1=$(nix-build ./text-hashed-output.nix -A hello --no-out-link) +# Store layer needs bugfix +requireDaemonNewerThan "2.30pre20250515" + clearStore -expectStderr 1 nix-build ./text-hashed-output.nix -A wrapper --no-out-link | grepQuiet "Building dynamic derivations in one shot is not yet implemented" +out2=$(nix-build ./text-hashed-output.nix -A wrapper --no-out-link) -# diff -r $out1 $out2 +diff -r $out1 $out2 diff --git a/tests/functional/dyn-drv/failing-outer.sh b/tests/functional/dyn-drv/failing-outer.sh index fbad70701..3feda74fb 100644 --- a/tests/functional/dyn-drv/failing-outer.sh +++ b/tests/functional/dyn-drv/failing-outer.sh @@ -3,9 +3,7 @@ source common.sh # Store layer needs bugfix -requireDaemonNewerThan "2.27pre20250205" - -skipTest "dyn drv input scheduling had to be reverted for 2.27" +requireDaemonNewerThan "2.30pre20250515" expected=100 if [[ -v NIX_DAEMON_PACKAGE ]]; then expected=1; fi # work around the daemon not returning a 100 status correctly diff --git a/tests/functional/fetchGit.sh b/tests/functional/fetchGit.sh index 219c4f0da..a41aa35c0 100755 --- a/tests/functional/fetchGit.sh +++ b/tests/functional/fetchGit.sh @@ -12,7 +12,7 @@ repo=$TEST_ROOT/./git export _NIX_FORCE_HTTP=1 -rm -rf $repo ${repo}-tmp $TEST_HOME/.cache/nix $TEST_ROOT/worktree $TEST_ROOT/shallow $TEST_ROOT/minimal +rm -rf $repo ${repo}-tmp $TEST_HOME/.cache/nix $TEST_ROOT/worktree $TEST_ROOT/minimal git init $repo git -C $repo config user.email "foobar@example.com" @@ -216,18 +216,6 @@ git -C $TEST_ROOT/minimal fetch $repo $rev2 git -C $TEST_ROOT/minimal checkout $rev2 [[ $(nix eval --impure --raw --expr "(builtins.fetchGit { url = $TEST_ROOT/minimal; }).rev") = $rev2 ]] -# Fetching a shallow repo shouldn't work by default, because we can't -# return a revCount. -git clone --depth 1 file://$repo $TEST_ROOT/shallow -(! 
nix eval --impure --raw --expr "(builtins.fetchGit { url = $TEST_ROOT/shallow; ref = \"dev\"; }).outPath") - -# But you can request a shallow clone, which won't return a revCount. -path6=$(nix eval --impure --raw --expr "(builtins.fetchTree { type = \"git\"; url = \"file://$TEST_ROOT/shallow\"; ref = \"dev\"; shallow = true; }).outPath") -[[ $path3 = $path6 ]] -[[ $(nix eval --impure --expr "(builtins.fetchTree { type = \"git\"; url = \"file://$TEST_ROOT/shallow\"; ref = \"dev\"; shallow = true; }).revCount or 123") == 123 ]] - -expectStderr 1 nix eval --expr 'builtins.fetchTree { type = "git"; url = "file:///foo"; }' | grepQuiet "'fetchTree' doesn't fetch unlocked input" - # Explicit ref = "HEAD" should work, and produce the same outPath as without ref path7=$(nix eval --impure --raw --expr "(builtins.fetchGit { url = \"file://$repo\"; ref = \"HEAD\"; }).outPath") path8=$(nix eval --impure --raw --expr "(builtins.fetchGit { url = \"file://$repo\"; }).outPath") @@ -292,17 +280,20 @@ path11=$(nix eval --impure --raw --expr "(builtins.fetchGit ./.).outPath") empty="$TEST_ROOT/empty" git init "$empty" -emptyAttrs='{ lastModified = 0; lastModifiedDate = "19700101000000"; narHash = "sha256-pQpattmS9VmO3ZIQUFn66az8GSmB4IvYhTTCFn6SUmo="; rev = "0000000000000000000000000000000000000000"; revCount = 0; shortRev = "0000000"; submodules = false; }' - -[[ $(nix eval --impure --expr "builtins.removeAttrs (builtins.fetchGit $empty) [\"outPath\"]") = $emptyAttrs ]] +emptyAttrs="{ lastModified = 0; lastModifiedDate = \"19700101000000\"; narHash = \"sha256-pQpattmS9VmO3ZIQUFn66az8GSmB4IvYhTTCFn6SUmo=\"; rev = \"0000000000000000000000000000000000000000\"; revCount = 0; shortRev = \"0000000\"; submodules = false; }" +result=$(nix eval --impure --expr "builtins.removeAttrs (builtins.fetchGit $empty) [\"outPath\"]") +[[ "$result" = "$emptyAttrs" ]] echo foo > "$empty/x" -[[ $(nix eval --impure --expr "builtins.removeAttrs (builtins.fetchGit $empty) [\"outPath\"]") = $emptyAttrs ]] +result=$(nix eval --impure --expr "builtins.removeAttrs (builtins.fetchGit $empty) [\"outPath\"]") +[[ "$result" = "$emptyAttrs" ]] git -C "$empty" add x -[[ $(nix eval --impure --expr "builtins.removeAttrs (builtins.fetchGit $empty) [\"outPath\"]") = '{ lastModified = 0; lastModifiedDate = "19700101000000"; narHash = "sha256-wzlAGjxKxpaWdqVhlq55q5Gxo4Bf860+kLeEa/v02As="; rev = "0000000000000000000000000000000000000000"; revCount = 0; shortRev = "0000000"; submodules = false; }' ]] +expected_attrs="{ lastModified = 0; lastModifiedDate = \"19700101000000\"; narHash = \"sha256-wzlAGjxKxpaWdqVhlq55q5Gxo4Bf860+kLeEa/v02As=\"; rev = \"0000000000000000000000000000000000000000\"; revCount = 0; shortRev = \"0000000\"; submodules = false; }" +result=$(nix eval --impure --expr "builtins.removeAttrs (builtins.fetchGit $empty) [\"outPath\"]") +[[ "$result" = "$expected_attrs" ]] # Test a repo with an empty commit. 
git -C "$empty" rm -f x diff --git a/tests/functional/fetchGitShallow.sh b/tests/functional/fetchGitShallow.sh new file mode 100644 index 000000000..4c21bd7ac --- /dev/null +++ b/tests/functional/fetchGitShallow.sh @@ -0,0 +1,67 @@ +#!/usr/bin/env bash + +# shellcheck source=common.sh +source common.sh + +requireGit + +# Create a test repo with multiple commits for all our tests +git init "$TEST_ROOT/shallow-parent" +git -C "$TEST_ROOT/shallow-parent" config user.email "foobar@example.com" +git -C "$TEST_ROOT/shallow-parent" config user.name "Foobar" + +# Add several commits to have history +echo "{ outputs = _: {}; }" > "$TEST_ROOT/shallow-parent/flake.nix" +echo "" > "$TEST_ROOT/shallow-parent/file.txt" +git -C "$TEST_ROOT/shallow-parent" add file.txt flake.nix +git -C "$TEST_ROOT/shallow-parent" commit -m "First commit" + +echo "second" > "$TEST_ROOT/shallow-parent/file.txt" +git -C "$TEST_ROOT/shallow-parent" commit -m "Second commit" -a + +echo "third" > "$TEST_ROOT/shallow-parent/file.txt" +git -C "$TEST_ROOT/shallow-parent" commit -m "Third commit" -a + +# Add a branch for testing ref fetching +git -C "$TEST_ROOT/shallow-parent" checkout -b dev +echo "branch content" > "$TEST_ROOT/shallow-parent/branch-file.txt" +git -C "$TEST_ROOT/shallow-parent" add branch-file.txt +git -C "$TEST_ROOT/shallow-parent" commit -m "Branch commit" + +# Make a shallow clone (depth=1) +git clone --depth 1 "file://$TEST_ROOT/shallow-parent" "$TEST_ROOT/shallow-clone" + +# Test 1: Fetching a shallow repo shouldn't work by default, because we can't +# return a revCount. +(! nix eval --impure --raw --expr "(builtins.fetchGit { url = \"$TEST_ROOT/shallow-clone\"; ref = \"dev\"; }).outPath") + +# Test 2: But you can request a shallow clone, which won't return a revCount. +path=$(nix eval --impure --raw --expr "(builtins.fetchTree { type = \"git\"; url = \"file://$TEST_ROOT/shallow-clone\"; ref = \"dev\"; shallow = true; }).outPath") +# Verify file from dev branch exists +[[ -f "$path/branch-file.txt" ]] +# Verify revCount is missing +[[ $(nix eval --impure --expr "(builtins.fetchTree { type = \"git\"; url = \"file://$TEST_ROOT/shallow-clone\"; ref = \"dev\"; shallow = true; }).revCount or 123") == 123 ]] + +# Test 3: Check unlocked input error message +expectStderr 1 nix eval --expr 'builtins.fetchTree { type = "git"; url = "file:///foo"; }' | grepQuiet "'fetchTree' doesn't fetch unlocked input" + +# Test 4: Regression test for revCount in worktrees derived from shallow clones +# Add a worktree to the shallow clone +git -C "$TEST_ROOT/shallow-clone" worktree add "$TEST_ROOT/shallow-worktree" + +# Prior to the fix, this would error out because of the shallow clone's +# inability to find parent commits. Now it should return an error. 
+if nix eval --impure --expr "(builtins.fetchGit { url = \"file://$TEST_ROOT/shallow-worktree\"; }).revCount" 2>/dev/null; then + echo "fetchGit unexpectedly succeeded on shallow clone" >&2 + exit 1 +fi + +# Also verify that fetchTree fails similarly +if nix eval --impure --expr "(builtins.fetchTree { type = \"git\"; url = \"file://$TEST_ROOT/shallow-worktree\"; }).revCount" 2>/dev/null; then + echo "fetchTree unexpectedly succeeded on shallow clone" >&2 + exit 1 +fi + +# Verify that we can shallow fetch the worktree +git -C "$TEST_ROOT/shallow-worktree" rev-list --count HEAD >/dev/null +nix eval --impure --raw --expr "(builtins.fetchGit { url = \"file://$TEST_ROOT/shallow-worktree\"; shallow = true; }).rev" diff --git a/tests/functional/flakes/non-flake-inputs.sh b/tests/functional/flakes/non-flake-inputs.sh index 7e55aca20..6b1c6a941 100644 --- a/tests/functional/flakes/non-flake-inputs.sh +++ b/tests/functional/flakes/non-flake-inputs.sh @@ -37,11 +37,20 @@ cat > "$flake3Dir/flake.nix" <flake.nix <nested-flake1/flake.nix <nested-flake1/nested-flake2/flake.nix < flake.nix } EOF +mkdir subflake +cp ./simple.nix ./simple.builder.sh ./formatter.simple.sh "${config_nix}" "$TEST_HOME/subflake" + +cat << EOF > subflake/flake.nix +{ + outputs = _: { + formatter.$system = + with import ./config.nix; + mkDerivation { + name = "formatter"; + buildCommand = '' + mkdir -p \$out/bin + echo "#! ${shell}" > \$out/bin/formatter + cat \${./formatter.simple.sh} >> \$out/bin/formatter + chmod +x \$out/bin/formatter + ''; + }; + }; +} +EOF + # No arguments check -[[ "$(nix fmt)" = "Formatting(0):" ]] -[[ "$(nix formatter run)" = "Formatting(0):" ]] +[[ "$(nix fmt)" = "PRJ_ROOT=$TEST_HOME Formatting(0):" ]] +[[ "$(nix formatter run)" = "PRJ_ROOT=$TEST_HOME Formatting(0):" ]] # Argument forwarding check -nix fmt ./file ./folder | grep 'Formatting(2): ./file ./folder' -nix formatter run ./file ./folder | grep 'Formatting(2): ./file ./folder' +nix fmt ./file ./folder | grep "PRJ_ROOT=$TEST_HOME Formatting(2): ./file ./folder" +nix formatter run ./file ./folder | grep "PRJ_ROOT=$TEST_HOME Formatting(2): ./file ./folder" + +# test subflake +cd subflake +nix fmt ./file | grep "PRJ_ROOT=$TEST_HOME/subflake Formatting(1): ./file" # Build checks ## Defaults to a ./result. 
diff --git a/tests/functional/formatter.simple.sh b/tests/functional/formatter.simple.sh index e53f6c9be..355ff00ef 100755 --- a/tests/functional/formatter.simple.sh +++ b/tests/functional/formatter.simple.sh @@ -1,2 +1,2 @@ #!/usr/bin/env bash -echo "Formatting(${#}):" "${@}" +echo "PRJ_ROOT=$PRJ_ROOT Formatting(${#}):" "${@}" diff --git a/tests/functional/lang/eval-okay-regex-match2.exp b/tests/functional/lang/eval-okay-regex-match2.exp new file mode 100644 index 000000000..b7fb4e05e --- /dev/null +++ b/tests/functional/lang/eval-okay-regex-match2.exp @@ -0,0 +1 @@ +[ null null null null null null null null null null [ ] [ ] null null null null [ "gnu" "m4/m4-1.4.19.tar.bz2" ] null [ "cpan" "src/5.0/perl-5.40.0.tar.gz" ] null null null [ "10" "" ] [ "11" "" ] [ "36" ] null [ "exec" ] [ ] null [ "26" ] null [ "26" ] null [ ] null null null null null [ "meson.patch?h=mingw-w64-xorgproto&id=7b817efc3144a50e6766817c4ca7242f8ce49307" ] null null [ "xmlto" ] null null [ "exec" ] null null [ ] [ ] null null [ "coconutbattery-4.0.2,152" ] [ "12" "0" ] [ "12" "8" ] [ "8" "9" "5" "30" ] [ "9" "7" "1" "26" ] null null [ ] [ ] null [ ] null [ ] null null [ null null "draupnir" ] [ ] [ ] null null [ null null "renderer" ] [ ] [ ] [ null ] null [ null ] null null null [ ] [ ] [ "p" ] [ "p" ] [ "systemtap" ] null [ ] null null [ ] null [ "20220722-71c783507536-b7eae18423ef" ] [ "20220726-bac6d66b5ca1-5b966f2f136c" ] [ ] [ "0.3.2308" ] null null null [ "17.0.14+" "7" ] null [ null ] [ null ] null null [ "21.0.7+" "6" ] null null [ ] [ ] [ "8u442" "06" ] [ ] [ "jna" "5.6.0" null null ] [ "jna" "5.6.0" null null ] [ ] [ ] null [ "2" ] null [ ] [ ] null null [ ] [ ] null null [ ] null [ ] [ "https://github.com/GRA0007/google-cloud-rs.git" null null null "4a2db92efd57a896e14d18877458c6ae43418aec" ] [ "https://github.com/GRA0007/google-cloud-rs.git" null null null "4a2db92efd57a896e14d18877458c6ae43418aec" ] null [ ] null [ "rejeep" "ansi.el" ] null [ "rejeep" "commander.el" ] null [ "2.2.4" "20231021.200112" "6" ] [ "2.2.4" "20231021.200112" "6" ] [ ] [ ] null null [ "" ] [ "" ".git" ] [ "" "\\.git" ] null null null [ "" "__pycache__" ] [ "" "__pycache__" ] null null null [ "" ] null null [ ] [ ] [ ] [ "8u442" "06" ] [ ] [ ] [ "simulator" ] null null null null null null [ "notify-send" ] [ "playlistmanager" ] [ ] [ ] null null null null [ "name" ] [ "name" ] null null null null [ "pypy" "27" ] [ "pypy" "310" ] [ "refs/heads/master" ] null [ "refs/heads/master" ] null null [ ] null null null null [ ] [ ] [ ] [ ] [ "b7eae18423ef" ] [ "20220726-bac6d66b5ca1-5b966f2f136c" ] null null [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ "" ] [ ] ] diff --git a/tests/functional/lang/eval-okay-regex-match2.nix b/tests/functional/lang/eval-okay-regex-match2.nix new file mode 100644 index 000000000..31a94423d --- /dev/null +++ b/tests/functional/lang/eval-okay-regex-match2.nix @@ -0,0 +1,938 @@ +# Derived from nixpkgs f870c6ccc8951fc48aeb293cf3e98ade6ac42668 by instrumenting +# builtins.match to collect at most 2 non-matching and 2 matching cases of every +# regex used when running: +# `nix-env --query --available --out-path --eval-system x86_64-linux`. 
+ +builtins.map + ( + list: + let + re = builtins.head list; + str = builtins.elemAt list 1; + in + builtins.match re str + ) + [ + [ + ''(.*)e?abi.*'' + ''linux'' + ] + [ + ''(.*)e?abi.*'' + ''linux'' + ] + [ + ''.*-none.*'' + ''x86_64-unknown-linux-gnu'' + ] + [ + ''.*nvptx.*'' + ''x86_64-unknown-linux-gnu'' + ] + [ + ''.*switch.*'' + ''x86_64-unknown-linux-gnu'' + ] + [ + ''.*-uefi.*'' + ''x86_64-unknown-linux-gnu'' + ] + [ + ''.*-none.*'' + ''x86_64-unknown-linux-gnu'' + ] + [ + ''.*nvptx.*'' + ''x86_64-unknown-linux-gnu'' + ] + [ + ''.*switch.*'' + ''x86_64-unknown-linux-gnu'' + ] + [ + ''.*-uefi.*'' + ''x86_64-unknown-linux-gnu'' + ] + [ + ''[[:alnum:]+_?=-][[:alnum:]+._?=-]*'' + ''bootstrap-stage0-glibc-bootstrapFiles'' + ] + [ + ''[[:alnum:]+_?=-][[:alnum:]+._?=-]*'' + ''glibc-2.40-66'' + ] + [ + ''mirror://([a-z]+)/(.*)'' + ''https://github.com/madler/zlib/releases/download/v1.3.1/zlib-1.3.1.tar.gz'' + ] + [ + ''mirror://([a-z]+)/(.*)'' + ''https://git.savannah.gnu.org/cgit/config.git/plain/config.guess?id=948ae97ca5703224bd3eada06b7a69f40dd15a02'' + ] + [ + ''.*/.*'' + ''mktemp'' + ] + [ + ''.*/.*'' + ''rm'' + ] + [ + ''mirror://([a-z]+)/(.*)'' + ''mirror://gnu/m4/m4-1.4.19.tar.bz2'' + ] + [ + ''5\.[0-9]*[13579]\..+'' + ''5.40.0'' + ] + [ + ''mirror://([a-z]+)/(.*)'' + ''mirror://cpan/src/5.0/perl-5.40.0.tar.gz'' + ] + [ + ''5\.[0-9]*[13579]\..+'' + ''5.40.0'' + ] + [ + ''^([0-9][0-9\.]*)(.*)$'' + ''addons'' + ] + [ + ''^([0-9][0-9\.]*)(.*)$'' + ''extras'' + ] + [ + ''^([0-9][0-9\.]*)(.*)$'' + ''10'' + ] + [ + ''^([0-9][0-9\.]*)(.*)$'' + ''11'' + ] + [ + ''[[:space:]]*0*(-?[[:digit:]]+)[[:space:]]*'' + ''36'' + ] + [ + ''0+'' + ''36'' + ] + [ + ''/bin/([^/]+)'' + ''/bin/exec'' + ] + [ + ''[[:alnum:],._+:@%/-]+'' + ''/bin/exec'' + ] + [ + ''[[:alnum:],._+:@%/-]+'' + '''' + ] + [ + ''[[:space:]]*(-?[[:digit:]]+)[[:space:]]*'' + ''26'' + ] + [ + ''0[[:digit:]]+'' + ''26'' + ] + [ + ''[[:space:]]*(-?[[:digit:]]+)[[:space:]]*'' + ''26'' + ] + [ + ''0[[:digit:]]+'' + ''26'' + ] + [ + ''[[:alnum:],._+:@%/-]+'' + ''@tcl@'' + ] + [ + ''[[:alnum:],._+:@%/-]+'' + ''@[a-zA-Z_][0-9A-Za-z_'-]*@'' + ] + [ + ''.*pypy.*'' + ''/nix/store/8w718rm43x7z73xhw9d6vh8s4snrq67h-python3-3.12.10/bin/python3.12'' + ] + [ + ''(.*/)?\.\.(/.*)?'' + ''package.nix'' + ] + [ + ''/bin/([^/]+)'' + '''' + ] + [ + ''[[:alnum:]+_?=-][[:alnum:]+._?=-]*'' + ''meson.patch?h=mingw-w64-xorgproto&id=7b817efc3144a50e6766817c4ca7242f8ce49307'' + ] + [ + ''\.*(.*)'' + ''meson.patch?h=mingw-w64-xorgproto&id=7b817efc3144a50e6766817c4ca7242f8ce49307'' + ] + [ + ''/bin/([^/]+)'' + '''' + ] + [ + ''.*-rc.*'' + ''2.49.0'' + ] + [ + ''(.*)\.git'' + ''xmlto.git'' + ] + [ + ''[a-f0-9]*'' + ''0.0.29'' + ] + [ + ''.*-rc.*'' + ''2.49.0'' + ] + [ + ''/bin/([^/]+)'' + ''/bin/exec'' + ] + [ + ''.*-polly.*'' + ''/nix/store/0yxfdnfxbzczjxhgdpac81jnas194wfj-gnu-install-dirs.patch'' + ] + [ + ''.*-polly.*'' + ''/nix/store/jh2pda7psaasq85b2rrigmkjdbl8d0a1-llvm-lit-cfg-add-libs-to-dylib-path.patch'' + ] + [ + ''.*-polly.*'' + ''/nix/store/x868j4ih7wqiivf6wr9m4g424jav0hpq-gnu-install-dirs-polly.patch'' + ] + [ + ''.*-polly.*'' + ''/nix/store/gr73nf6sca9nyzl88x58y3qxrav04yhd-polly-lit-cfg-add-libs-to-dylib-path.patch'' + ] + [ + ''(.*/)?\.\.(/.*)?'' + ''package.nix'' + ] + [ + ''[[:alnum:]+_?=-][[:alnum:]+._?=-]*'' + ''coconutbattery-4.0.2,152'' + ] + [ + ''\.*(.*)'' + ''coconutbattery-4.0.2,152'' + ] + [ + ''^([[:digit:]]+)\.([[:digit:]]+)$'' + ''12.0'' + ] + [ + ''^([[:digit:]]+)\.([[:digit:]]+)$'' + ''12.8'' + ] + [ + 
''^([[:digit:]]+)\.([[:digit:]]+)\.([[:digit:]]+)\.([[:digit:]]+)$'' + ''8.9.5.30'' + ] + [ + ''^([[:digit:]]+)\.([[:digit:]]+)\.([[:digit:]]+)\.([[:digit:]]+)$'' + ''9.7.1.26'' + ] + [ + ''^/.*'' + ''8.20'' + ] + [ + ''^/.*'' + ''8.20'' + ] + [ + ''^github.*'' + ''github.com'' + ] + [ + ''^github.*'' + ''github.com'' + ] + [ + ''^github.*'' + ''gitlab.inria.fr'' + ] + [ + ''^gitlab.*'' + ''gitlab.inria.fr'' + ] + [ + ''^github.*'' + ''gitlab.inria.fr'' + ] + [ + ''^gitlab.*'' + ''gitlab.inria.fr'' + ] + [ + ''^gitlab.*'' + ''sf.snu.ac.kr'' + ] + [ + ''^gitlab.*'' + ''sf.snu.ac.kr'' + ] + [ + ''^(@([^/]+)/)?([^/]+)$'' + ''draupnir'' + ] + [ + ''^[[:digit:]].*'' + ''0xproto'' + ] + [ + ''^[[:digit:]].*'' + ''3270'' + ] + [ + ''^[[:digit:]].*'' + ''adwaita-mono'' + ] + [ + ''^[[:digit:]].*'' + ''agave'' + ] + [ + ''^(@([^/]+)/)?([^/]+)$'' + ''renderer'' + ] + [ + ''[^[:space:]]*'' + ''900,906,908,1010,1012,1030'' + ] + [ + ''[^[:space:]]*'' + '''' + ] + [ + ''.*[0-9]_LIN(UX)?.sh'' + ''Wolfram_14.2.1_LIN.sh'' + ] + [ + ''.*[0-9]_LIN(UX)?.sh'' + ''Wolfram_14.2.1_LIN_Bndl.sh'' + ] + [ + ''.*[0-9]_LIN(UX)?.sh'' + ''Wolfram_14.2.0_LIN.sh'' + ] + [ + ''.*[0-9]_LIN(UX)?.sh'' + ''Wolfram_14.2.0_LIN_Bndl.sh'' + ] + [ + ''[A-Z]'' + ''b'' + ] + [ + ''[A-Z]'' + ''l'' + ] + [ + ''[A-Z]'' + ''E'' + ] + [ + ''[A-Z]'' + ''T'' + ] + [ + ''([0-9A-Za-z._])[0-9A-Za-z._-]*'' + ''pythoncheck.sh'' + ] + [ + ''([0-9A-Za-z._])[0-9A-Za-z._-]*'' + ''pythoncheck.sh'' + ] + [ + ''(.*)\.git'' + ''systemtap.git'' + ] + [ + ''[a-f0-9]*'' + ''release-5.2'' + ] + [ + ''[a-f0-9]*'' + ''b7a857659f8485ee3c6769c27a3e74b0af910746'' + ] + [ + ''.*pypy.*'' + ''/nix/store/8w718rm43x7z73xhw9d6vh8s4snrq67h-python3-3.12.10/bin/python3.12'' + ] + [ + ''(.*)\.git'' + ''gn'' + ] + [ + ''[a-f0-9]*'' + ''df98b86690c83b81aedc909ded18857296406159'' + ] + [ + ''.*-rc\..*'' + ''22.14.0'' + ] + [ + ''.*/linux-gecko-(.*).tar.bz2'' + ''https://static.replay.io/downloads/linux-gecko-20220722-71c783507536-b7eae18423ef.tar.bz2'' + ] + [ + ''.*/linux-node-(.*)'' + ''https://static.replay.io/downloads/linux-node-20220726-bac6d66b5ca1-5b966f2f136c'' + ] + [ + ''.*-DSQLITE_ENABLE_FTS3.*'' + ''-DSQLITE_ENABLE_COLUMN_METADATA -DSQLITE_ENABLE_DBSTAT_VTAB -DSQLITE_ENABLE_JSON1 -DSQLITE_ENABLE_FTS3 -DSQLITE_ENABLE_FTS3_PARENTHESIS -DSQLITE_ENABLE_FTS3_TOKENIZER -DSQLITE_ENABLE_FTS4 -DSQLITE_ENABLE_FTS5 -DSQLITE_ENABLE_PREUPDATE_HOOK -DSQLITE_ENABLE_RTREE -DSQLITE_ENABLE_SESSION -DSQLITE_ENABLE_STMT_SCANSTATUS -DSQLITE_ENABLE_UNLOCK_NOTIFY -DSQLITE_SOUNDEX -DSQLITE_SECURE_DELETE -DSQLITE_MAX_VARIABLE_NUMBER=250000 -DSQLITE_MAX_EXPR_DEPTH=10000'' + ] + [ + '' + [ + ]*(.*[^ + ])[ + ]*'' + '' + 0.3.2308 + '' + ] + [ + ''(.*)\.git'' + ''rtmpdump'' + ] + [ + ''.*;.*'' + ''Game'' + ] + [ + ''.*;.*'' + ''Game'' + ] + [ + ''(.+)+(.+)'' + ''17.0.14+7'' + ] + [ + ''^#(.*)$'' + ''20240715'' + ] + [ + ''[[:alpha:]_][[:alnum:]_]*(\.[[:alpha:]_][[:alnum:]_]*)*'' + ''external_deps_dirs'' + ] + [ + ''[[:alpha:]_][[:alnum:]_]*(\.[[:alpha:]_][[:alnum:]_]*)*'' + ''local_cache'' + ] + [ + ''^#(.*)$'' + ''20240715'' + ] + [ + ''.*-rc\..*'' + ''20.19.2'' + ] + [ + ''(.+)+(.+)'' + ''21.0.7+6'' + ] + [ + ''.*llvm-tblgen.*'' + ''-DLLVM_INSTALL_PACKAGE_DIR:STRING=/02qcpld1y6xhs5gz9bchpxaw0xdhmsp5dv88lh25r2ss44kh8dxz/lib/cmake/llvm'' + ] + [ + ''.*llvm-tblgen.*'' + ''-DLLVM_ENABLE_RTTI:BOOL=TRUE'' + ] + [ + ''.*llvm-tblgen.*'' + ''-DLLVM_TABLEGEN:STRING=/nix/store/xp9hkw8nsw9p81d69yvcg1yr6f7vh71c-llvm-tblgen-18.1.8/bin/llvm-tblgen'' + ] + [ + ''.*llvm-tblgen.*'' + 
''-DLLVM_TABLEGEN_EXE:STRING=/nix/store/xp9hkw8nsw9p81d69yvcg1yr6f7vh71c-llvm-tblgen-18.1.8/bin/llvm-tblgen'' + ] + [ + ''(.+)-b(.+)'' + ''8u442-b06'' + ] + [ + ''.*-DSQLITE_ENABLE_FTS3.*'' + ''-DSQLITE_ENABLE_COLUMN_METADATA -DSQLITE_ENABLE_DBSTAT_VTAB -DSQLITE_ENABLE_JSON1 -DSQLITE_ENABLE_FTS3 -DSQLITE_ENABLE_FTS3_PARENTHESIS -DSQLITE_ENABLE_FTS3_TOKENIZER -DSQLITE_ENABLE_FTS4 -DSQLITE_ENABLE_FTS5 -DSQLITE_ENABLE_PREUPDATE_HOOK -DSQLITE_ENABLE_RTREE -DSQLITE_ENABLE_SESSION -DSQLITE_ENABLE_STMT_SCANSTATUS -DSQLITE_ENABLE_UNLOCK_NOTIFY -DSQLITE_SOUNDEX -DSQLITE_SECURE_DELETE -DSQLITE_MAX_VARIABLE_NUMBER=250000 -DSQLITE_MAX_EXPR_DEPTH=10000'' + ] + [ + ''([^/]*)/([^/]*)(/SNAPSHOT)?(/.*)?'' + ''jna/5.6.0'' + ] + [ + ''([^/]*)/([^/]*)(/SNAPSHOT)?(/.*)?'' + ''jna/5.6.0'' + ] + [ + ''[0-9]+'' + ''2'' + ] + [ + ''[0-9]+'' + ''3'' + ] + [ + ''[0-9]+'' + ''unstable'' + ] + [ + ''[[:space:]]*0*(-?[[:digit:]]+)[[:space:]]*'' + ''2'' + ] + [ + ''0+'' + ''2'' + ] + [ + ''0+'' + ''0'' + ] + [ + ''.*org/eclipse/jdt/ecj.*'' + ''https://repo.maven.apache.org/maven2/org/eclipse/jdt/ecj/maven-metadata.xml'' + ] + [ + ''.*[<>"'&].*'' + ''org.eclipse.jdt'' + ] + [ + ''.*[<>"'&].*'' + ''20241203050026'' + ] + [ + ''[a-zA-Z_][a-zA-Z0-9_'-]*'' + ''cpu'' + ] + [ + ''[a-zA-Z_][a-zA-Z0-9_'-]*'' + ''bits'' + ] + [ + ''armv[67]l-linux'' + ''x86_64-linux'' + ] + [ + ''armv[67]l-linux'' + ''x86_64-linux'' + ] + [ + ''0+'' + ''0'' + ] + [ + ''[0-9]+'' + ''rc'' + ] + [ + ''.*tensorflow_cpu.*'' + ''https://storage.googleapis.com/tensorflow/versions/2.19.0/tensorflow_cpu-2.19.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl'' + ] + [ + ''git\+([^?]+)(\?(rev|tag|branch)=(.*))?#(.*)'' + ''git+https://github.com/GRA0007/google-cloud-rs.git#4a2db92efd57a896e14d18877458c6ae43418aec'' + ] + [ + ''git\+([^?]+)(\?(rev|tag|branch)=(.*))?#(.*)'' + ''git+https://github.com/GRA0007/google-cloud-rs.git#4a2db92efd57a896e14d18877458c6ae43418aec'' + ] + [ + ''mpv[-_](.*)'' + ''detect-image'' + ] + [ + ''.*org/bouncycastle/bcutil-lts8on.*'' + ''https://plugins.gradle.org/m2/org/bouncycastle/bcutil-lts8on/maven-metadata.xml'' + ] + [ + ''^.*-unstable-([[:digit:]]{4})-([[:digit:]]{2})-([[:digit:]]{2})$'' + ''0.9.0'' + ] + [ + ''(.+)/(.+)'' + ''rejeep/ansi.el'' + ] + [ + ''^.*-unstable-([[:digit:]]{4})-([[:digit:]]{2})-([[:digit:]]{2})$'' + ''20230306.1823'' + ] + [ + ''(.+)/(.+)'' + ''rejeep/commander.el'' + ] + [ + ''mpv[-_](.*)'' + ''equalizer'' + ] + [ + ''(.*)-([^-]*)-([^-]*)'' + ''2.2.4-20231021.200112-6'' + ] + [ + ''(.*)-([^-]*)-([^-]*)'' + ''2.2.4-20231021.200112-6'' + ] + [ + ''.*com/badlogicgames/gdx-controllers/gdx-controllers-core.*'' + ''https://oss.sonatype.org/content/repositories/snapshots/com/badlogicgames/gdx-controllers/gdx-controllers-core/2.2.4-SNAPSHOT/maven-metadata.xml'' + ] + [ + ''.*com/badlogicgames/gdx-controllers/gdx-controllers-desktop.*'' + ''https://oss.sonatype.org/content/repositories/snapshots/com/badlogicgames/gdx-controllers/gdx-controllers-desktop/2.2.4-SNAPSHOT/maven-metadata.xml'' + ] + [ + ''^(#.*|$)'' + ''.git'' + ] + [ + ''^(#.*|$)'' + ''__pycache__'' + ] + [ + ''^(#.*|$)'' + '''' + ] + [ + ''^(!?)(.*)'' + ''.git'' + ] + [ + ''^(/?)(.*)'' + ''\.git'' + ] + [ + ''.+/.+'' + ''\.git'' + ] + [ + ''^(.*)/$'' + ''(^|.*/)\.git'' + ] + [ + ''(^|.*/)\.git'' + ''.flake8'' + ] + [ + ''^(!?)(.*)'' + ''__pycache__'' + ] + [ + ''^(/?)(.*)'' + ''__pycache__'' + ] + [ + ''.+/.+'' + ''__pycache__'' + ] + [ + ''^(.*)/$'' + ''(^|.*/)__pycache__'' + ] + [ + ''(^|.*/)__pycache__'' + ''.flake8'' + ] + [ + 
''^(#.*|$)'' + '''' + ] + [ + ''(^|.*/)\.git'' + ''.gitignore'' + ] + [ + ''(^|.*/)__pycache__'' + ''.gitignore'' + ] + [ + ''^[a-fA-F0-9]{40}$'' + ''3a667bdb3d7f0955a5a51c8468eac83210c1439e'' + ] + [ + ''.*com/android/tools/build/gradle.*'' + ''https://repo.maven.apache.org/maven2/com/android/tools/build/gradle/maven-metadata.xml'' + ] + [ + ''^[a-fA-F0-9]{40}$'' + ''dc0a228a5544988d4a920cfb40be9cd28db41423'' + ] + [ + ''(.+)-b(.+)'' + ''8u442-b06'' + ] + [ + ''.*com/tobiasdiez/easybind.*'' + ''https://oss.sonatype.org/content/groups/public/com/tobiasdiez/easybind/2.2.1-SNAPSHOT/maven-metadata.xml'' + ] + [ + ''.*org/hamcrest/hamcrest.*'' + ''https://repo.maven.apache.org/maven2/org/hamcrest/hamcrest/maven-metadata.xml'' + ] + [ + ''^.*CONFIG_BOARD_DIRECTORY="([a-zA-Z0-9_]+)".*$'' + '' + # CONFIG_LOW_LEVEL_OPTIONS is not set + # CONFIG_MACH_AVR is not set + # CONFIG_MACH_ATSAM is not set + # CONFIG_MACH_ATSAMD is not set + # CONFIG_MACH_LPC176X is not set + # CONFIG_MACH_STM32 is not set + # CONFIG_MACH_HC32F460 is not set + # CONFIG_MACH_RPXXXX is not set + # CONFIG_MACH_PRU is not set + # CONFIG_MACH_AR100 is not set + # CONFIG_MACH_LINUX is not set + CONFIG_MACH_SIMU=y + CONFIG_BOARD_DIRECTORY="simulator" + CONFIG_CLOCK_FREQ=20000000 + CONFIG_SERIAL=y + CONFIG_SIMULATOR_SELECT=y + CONFIG_SERIAL_BAUD=250000 + CONFIG_USB_VENDOR_ID=0x1d50 + CONFIG_USB_DEVICE_ID=0x614e + CONFIG_USB_SERIAL_NUMBER="12345" + CONFIG_WANT_ADC=y + CONFIG_WANT_SPI=y + CONFIG_WANT_SOFTWARE_SPI=y + CONFIG_WANT_HARD_PWM=y + CONFIG_WANT_BUTTONS=y + CONFIG_WANT_TMCUART=y + CONFIG_WANT_NEOPIXEL=y + CONFIG_WANT_PULSE_COUNTER=y + CONFIG_WANT_ST7920=y + CONFIG_WANT_HD44780=y + CONFIG_WANT_ADXL345=y + CONFIG_WANT_LIS2DW=y + CONFIG_WANT_THERMOCOUPLE=y + CONFIG_WANT_HX71X=y + CONFIG_WANT_ADS1220=y + CONFIG_WANT_SENSOR_ANGLE=y + CONFIG_NEED_SENSOR_BULK=y + CONFIG_CANBUS_FREQUENCY=1000000 + CONFIG_INLINE_STEPPER_HACK=y + CONFIG_HAVE_GPIO=y + CONFIG_HAVE_GPIO_ADC=y + CONFIG_HAVE_GPIO_SPI=y + CONFIG_HAVE_GPIO_HARD_PWM=y + '' + ] + [ + ''[^.]*[.][^.]*-.*'' + ''5.15.183-rt85'' + ] + [ + ''[^.]*[.][^.]*-.*'' + ''6.1.134-rt51'' + ] + [ + ''^\.sw[a-z]$'' + ''package.nix'' + ] + [ + ''^\..*\.sw[a-z]$'' + ''package.nix'' + ] + [ + ''^\.sw[a-z]$'' + ''pyproject.toml'' + ] + [ + ''^\..*\.sw[a-z]$'' + ''pyproject.toml'' + ] + [ + ''mpv[-_](.*)'' + ''mpv-notify-send'' + ] + [ + ''mpv[-_](.*)'' + ''mpv-playlistmanager'' + ] + [ + ''.*ch/qos/logback/logback-core.*'' + ''https://repo.maven.apache.org/maven2/ch/qos/logback/logback-core/maven-metadata.xml'' + ] + [ + ''.*commons-codec/commons-codec.*'' + ''https://repo.maven.apache.org/maven2/commons-codec/commons-codec/maven-metadata.xml'' + ] + [ + ''/[0-9a-z]{52}'' + ''/run/opengl-driver'' + ] + [ + ''/[0-9a-z]{52}'' + ''/dev/dri'' + ] + [ + ''<(.*)>'' + ''_module'' + ] + [ + ''<(.*)>'' + ''args'' + ] + [ + ''<(.*)>'' + '''' + ] + [ + ''<(.*)>'' + '''' + ] + [ + ''[a-zA-Z_][a-zA-Z0-9_'-]*'' + ''2bwm'' + ] + [ + ''[a-zA-Z_][a-zA-Z0-9_'-]*'' + ''pm.max_children'' + ] + [ + ''(pypy|python)([[:digit:]]*)'' + ''override'' + ] + [ + ''(pypy|python)([[:digit:]]*)'' + ''overrideDerivation'' + ] + [ + ''(pypy|python)([[:digit:]]*)'' + ''pypy27'' + ] + [ + ''(pypy|python)([[:digit:]]*)'' + ''pypy310'' + ] + [ + ''^ref: (.*)$'' + ''ref: refs/heads/master'' + ] + [ + ''^ref: (.*)$'' + ''f870c6ccc8951fc48aeb293cf3e98ade6ac42668'' + ] + [ + ''^ref: (.*)$'' + ''ref: refs/heads/master'' + ] + [ + ''^ref: (.*)$'' + ''f870c6ccc8951fc48aeb293cf3e98ade6ac42668'' + ] + [ + ''.*\.post[0-9]+'' + ''1.7.2'' + ] + [ 
+ ''.*tensorflow_cpu.*'' + ''https://storage.googleapis.com/tensorflow/versions/2.19.0/tensorflow_cpu-2.19.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl'' + ] + [ + ''.*tensorflow_cpu.*'' + ''https://storage.googleapis.com/tensorflow/versions/2.19.0/tensorflow-2.19.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl'' + ] + [ + ''.*\.post[0-9]+'' + ''1.7.2'' + ] + [ + ''.*darwin.*'' + ''i686-cygwin'' + ] + [ + ''.*darwin.*'' + ''x86_64-cygwin'' + ] + [ + ''.*darwin.*'' + ''x86_64-darwin'' + ] + [ + ''.*darwin.*'' + ''aarch64-darwin'' + ] + [ + ''.*com/badlogicgames/gdx-controllers/gdx-controllers-core.*'' + ''https://oss.sonatype.org/content/repositories/snapshots/com/badlogicgames/gdx-controllers/gdx-controllers-core/2.2.4-SNAPSHOT/maven-metadata.xml'' + ] + [ + ''.*com/badlogicgames/gdx-controllers/gdx-controllers-desktop.*'' + ''https://oss.sonatype.org/content/repositories/snapshots/com/badlogicgames/gdx-controllers/gdx-controllers-desktop/2.2.4-SNAPSHOT/maven-metadata.xml'' + ] + [ + ''.*/linux-recordreplay-(.*).tgz'' + ''https://static.replay.io/downloads/linux-recordreplay-b7eae18423ef.tgz'' + ] + [ + ''.*/linux-node-(.*)'' + ''https://static.replay.io/downloads/linux-node-20220726-bac6d66b5ca1-5b966f2f136c'' + ] + [ + ''.*-large-wordlist.*'' + ''hunspell-dict-cs-cz-libreoffice-6.3.0.4'' + ] + [ + ''.*-large-wordlist.*'' + ''hunspell-dict-da-dk-2.5.189'' + ] + [ + ''.*-large-wordlist.*'' + ''hunspell-dict-en-au-large-wordlist-2018.04.16'' + ] + [ + ''.*-large-wordlist.*'' + ''hunspell-dict-en-ca-large-wordlist-2018.04.16'' + ] + [ + ''.*com/fazecast/jSerialComm.*'' + ''https://oss.sonatype.org/content/repositories/snapshots/com/fazecast/jSerialComm/2.11.1-SNAPSHOT/maven-metadata.xml'' + ] + [ + ''.*net/java/dev/jna/jna-platform.*'' + ''https://oss.sonatype.org/content/repositories/snapshots/net/java/dev/jna/jna-platform/5.1.1-SNAPSHOT/maven-metadata.xml'' + ] + [ + ''.*net/java/dev/jna/jna-platform.*'' + ''https://oss.sonatype.org/content/repositories/snapshots/net/java/dev/jna/jna-platform/maven-metadata.xml'' + ] + [ + ''.*net/java/dev/jna/jna.*'' + ''https://oss.sonatype.org/content/repositories/snapshots/net/java/dev/jna/jna/5.1.1-SNAPSHOT/maven-metadata.xml'' + ] + [ + ''.*net/java/dev/jna/jna.*'' + ''https://oss.sonatype.org/content/repositories/snapshots/net/java/dev/jna/jna/maven-metadata.xml'' + ] + [ + ''.*org/java-websocket/Java-WebSocket.*'' + ''https://oss.sonatype.org/content/repositories/snapshots/org/java-websocket/Java-WebSocket/1.3.10-SNAPSHOT/maven-metadata.xml'' + ] + [ + ''.*org/java-websocket/Java-WebSocket.*'' + ''https://oss.sonatype.org/content/repositories/snapshots/org/java-websocket/Java-WebSocket/maven-metadata.xml'' + ] + [ + ''.*com/melloware/jintellitype.*'' + ''https://repo.maven.apache.org/maven2/com/melloware/jintellitype/maven-metadata.xml'' + ] + [ + ''[0-9.]*([a-z]*)'' + ''2025.1.1'' + ] + [ + ''.*com/velocitypowered/velocity-brigadier.*'' + ''https://repo.papermc.io/repository/maven-public/com/velocitypowered/velocity-brigadier/1.0.0-SNAPSHOT/maven-metadata.xml'' + ] + ] diff --git a/tests/functional/lang/eval-okay-sort.exp b/tests/functional/lang/eval-okay-sort.exp index 899119e20..fcb3b2224 100644 --- a/tests/functional/lang/eval-okay-sort.exp +++ b/tests/functional/lang/eval-okay-sort.exp @@ -1 +1 @@ -[ [ 42 77 147 249 483 526 ] [ 526 483 249 147 77 42 ] [ "bar" "fnord" "foo" "xyzzy" ] [ { key = 1; value = "foo"; } { key = 1; value = "fnord"; } { key = 2; value = "bar"; } ] [ [ ] [ ] [ 1 ] [ 1 4 ] [ 1 5 ] [ 1 6 ] [ 2 
] [ 2 3 ] [ 3 ] [ 3 ] ] ] +[ [ 42 77 147 249 483 526 ] [ 526 483 249 147 77 42 ] [ "bar" "fnord" "foo" "xyzzy" ] [ { key = 1; value = "foo"; } { key = 1; value = "fnord"; } { key = 2; value = "bar"; } ] [ { key = 1; value = "foo"; } { key = 1; value = "foo2"; } { key = 1; value = "foo3"; } { key = 1; value = "foo4"; } { key = 1; value = "foo5"; } { key = 1; value = "foo6"; } { key = 1; value = "foo7"; } { key = 1; value = "foo8"; } { key = 2; value = "bar"; } { key = 2; value = "bar2"; } { key = 2; value = "bar3"; } { key = 2; value = "bar4"; } { key = 2; value = "bar5"; } { key = 3; value = "baz"; } { key = 3; value = "baz2"; } { key = 3; value = "baz3"; } { key = 3; value = "baz4"; } { key = 4; value = "biz1"; } ] [ [ ] [ ] [ 1 ] [ 1 4 ] [ 1 5 ] [ 1 6 ] [ 2 ] [ 2 3 ] [ 3 ] [ 3 ] ] ] diff --git a/tests/functional/lang/eval-okay-sort.nix b/tests/functional/lang/eval-okay-sort.nix index 412bda4a0..7a3b7f71b 100644 --- a/tests/functional/lang/eval-okay-sort.nix +++ b/tests/functional/lang/eval-okay-sort.nix @@ -37,6 +37,80 @@ with builtins; value = "fnord"; } ]) + (sort (x: y: x.key < y.key) [ + { + key = 1; + value = "foo"; + } + { + key = 2; + value = "bar"; + } + { + key = 1; + value = "foo2"; + } + { + key = 2; + value = "bar2"; + } + { + key = 2; + value = "bar3"; + } + { + key = 2; + value = "bar4"; + } + { + key = 1; + value = "foo3"; + } + { + key = 3; + value = "baz"; + } + { + key = 3; + value = "baz2"; + } + { + key = 1; + value = "foo4"; + } + { + key = 3; + value = "baz3"; + } + { + key = 1; + value = "foo5"; + } + { + key = 1; + value = "foo6"; + } + { + key = 2; + value = "bar5"; + } + { + key = 3; + value = "baz4"; + } + { + key = 1; + value = "foo7"; + } + { + key = 4; + value = "biz1"; + } + { + key = 1; + value = "foo8"; + } + ]) (sort lessThan [ [ 1 diff --git a/tests/functional/logging.sh b/tests/functional/logging.sh index ddb1913ad..83df9a45d 100755 --- a/tests/functional/logging.sh +++ b/tests/functional/logging.sh @@ -33,3 +33,12 @@ if isDaemonNewer "2.26"; then # Build works despite ill-formed structured build log entries. expectStderr 0 nix build -f ./logging/unusual-logging.nix --no-link | grepQuiet 'warning: Unable to handle a JSON message from the derivation builder:' fi + +# Test json-log-path. 
+if [[ "$NIX_REMOTE" != "daemon" ]]; then + clearStore + nix build -vv --file dependencies.nix --no-link --json-log-path "$TEST_ROOT/log.json" 2>&1 | grepQuiet 'building.*dependencies-top.drv' + jq < "$TEST_ROOT/log.json" + grep '{"action":"start","fields":\[".*-dependencies-top.drv","",1,1\],"id":.*,"level":3,"parent":0' "$TEST_ROOT/log.json" >&2 + (( $(grep '{"action":"msg","level":5,"msg":"executing builder .*"}' "$TEST_ROOT/log.json" | wc -l) == 5 )) +fi diff --git a/tests/functional/meson.build b/tests/functional/meson.build index b2005d9d9..cd1bc6319 100644 --- a/tests/functional/meson.build +++ b/tests/functional/meson.build @@ -73,6 +73,7 @@ suites = [ 'gc-runtime.sh', 'tarball.sh', 'fetchGit.sh', + 'fetchGitShallow.sh', 'fetchurl.sh', 'fetchPath.sh', 'fetchTree-file.sh', @@ -133,6 +134,7 @@ suites = [ 'post-hook.sh', 'function-trace.sh', 'formatter.sh', + 'flamegraph-profiler.sh', 'eval-store.sh', 'why-depends.sh', 'derivation-json.sh', diff --git a/tests/functional/plugins/meson.build b/tests/functional/plugins/meson.build index ae66e3036..41050ffc1 100644 --- a/tests/functional/plugins/meson.build +++ b/tests/functional/plugins/meson.build @@ -3,6 +3,7 @@ libplugintest = shared_module( 'plugintest.cc', dependencies : [ dependency('nix-expr'), + # hack for trailing newline ], build_by_default : false, ) diff --git a/tests/functional/repl.sh b/tests/functional/repl.sh index 15846bb7f..d75b80bb0 100755 --- a/tests/functional/repl.sh +++ b/tests/functional/repl.sh @@ -67,7 +67,7 @@ testRepl () { # Simple test, try building a drv testRepl # Same thing (kind-of), but with a remote store. -testRepl --store "$TEST_ROOT/store?real=$NIX_STORE_DIR" +testRepl --store "$TEST_ROOT/other-root?real=$NIX_STORE_DIR" # Remove ANSI escape sequences. They can prevent grep from finding a match. stripColors () { @@ -157,7 +157,33 @@ foo + baz ' "3" \ ./flake ./flake\#bar -# Test the `:reload` mechanism with flakes: +testReplResponse $' +:a { a = 1; b = 2; longerName = 3; "with spaces" = 4; } +' 'Added 4 variables. +a, b, longerName, "with spaces" +' + +cat < attribute-set.nix +{ + a = 1; + b = 2; + longerName = 3; + "with spaces" = 4; +} +EOF +testReplResponse ' +:l ./attribute-set.nix +' 'Added 4 variables. +a, b, longerName, "with spaces" +' + +testReplResponseNoRegex $' +:a builtins.foldl\' (x: y: x // y) {} (map (x: { ${builtins.toString x} = x; }) (builtins.genList (x: x) 23)) +' 'Added 23 variables. +"0", "1", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "2", "20", "21", "22", "3", "4", "5", "6" +... and 3 more; view with :ll' + +# Test the `:reload` mechansim with flakes: # - Eval `./flake#changingThing` # - Modify the flake # - Re-eval it @@ -278,6 +304,12 @@ testReplResponseNoRegex ' } ' +# Don't prompt for more input when getting unexpected EOF in imported files. +testReplResponse " +import $testDir/lang/parse-fail-eof-pos.nix +" \ +'.*error: syntax error, unexpected end of file.*' + # TODO: move init to characterisation/framework.sh badDiff=0 badExitCode=0 @@ -323,7 +355,8 @@ runRepl () { -e "s@$testDir@/path/to/tests/functional@g" \ -e "s@$testDirNoUnderscores@/path/to/tests/functional@g" \ -e "s@$nixVersion@@g" \ - -e "s@Added [0-9]* variables@Added variables@g" \ + -e "/Added [0-9]* variables/{s@ [0-9]* @ @;n;d}" \ + -e '/\.\.\. 
and [0-9]* more; view with :ll/d' \ | grep -vF $'warning: you don\'t have Internet access; disabling some network-dependent features' \ ; } diff --git a/tests/functional/structured-attrs.sh b/tests/functional/structured-attrs.sh index 64d136e99..2bd9b4aaf 100755 --- a/tests/functional/structured-attrs.sh +++ b/tests/functional/structured-attrs.sh @@ -40,3 +40,14 @@ jsonOut="$(nix print-dev-env -f structured-attrs-shell.nix --json)" test "$(<<<"$jsonOut" jq '.structuredAttrs|keys|.[]' -r)" = "$(printf ".attrs.json\n.attrs.sh")" test "$(<<<"$jsonOut" jq '.variables.outputs.value.out' -r)" = "$(<<<"$jsonOut" jq '.structuredAttrs.".attrs.json"' -r | jq -r '.outputs.out')" + +# Hacky way of making structured attrs. We should preserve for now for back compat, but also deprecate. + +hackyExpr='derivation { name = "a"; system = "foo"; builder = "/bin/sh"; __json = builtins.toJSON { a = 1; }; }' + +# Check for deprecation message +expectStderr 0 nix-instantiate --expr "$hackyExpr" --eval --strict | grepQuiet "In derivation 'a': setting structured attributes via '__json' is deprecated, and may be disallowed in future versions of Nix. Set '__structuredAttrs = true' instead." + +# Check it works with the expected structured attrs +hacky=$(nix-instantiate --expr "$hackyExpr") +nix derivation show "$hacky" | jq --exit-status '."'"$hacky"'".structuredAttrs | . == {"a": 1}' diff --git a/tests/functional/supplementary-groups.sh b/tests/functional/supplementary-groups.sh index 400333f7d..a667d3e99 100755 --- a/tests/functional/supplementary-groups.sh +++ b/tests/functional/supplementary-groups.sh @@ -14,7 +14,6 @@ execUnshare <&2") + + # Building in /tmp should fail for security reasons. + err = machine.fail("nix build --offline --store /tmp/nix --expr 'builtins.derivation { name = \"foo\"; system = \"x86_64-linux\"; builder = \"/foo\"; }' 2>&1") + assert "is world-writable" in err ''; } diff --git a/tests/nixos/nix-docker.nix b/tests/nixos/nix-docker.nix index c58a00cdd..f1c218585 100644 --- a/tests/nixos/nix-docker.nix +++ b/tests/nixos/nix-docker.nix @@ -1,21 +1,15 @@ # Test the container built by ../../docker.nix. { - lib, config, - nixpkgs, - hostPkgs, ... 
}: let pkgs = config.nodes.machine.nixpkgs.pkgs; - nixImage = import ../../docker.nix { - inherit (config.nodes.machine.nixpkgs) pkgs; - }; - nixUserImage = import ../../docker.nix { - inherit (config.nodes.machine.nixpkgs) pkgs; + nixImage = pkgs.callPackage ../../docker.nix { }; + nixUserImage = pkgs.callPackage ../../docker.nix { name = "nix-user"; uid = 1000; gid = 1000; diff --git a/tests/nixos/user-sandboxing/default.nix b/tests/nixos/user-sandboxing/default.nix index 028efd17f..3f6b575b0 100644 --- a/tests/nixos/user-sandboxing/default.nix +++ b/tests/nixos/user-sandboxing/default.nix @@ -104,15 +104,16 @@ in # Wait for the build to be ready # This is OK because it runs as root, so we can access everything - machine.wait_for_file("/tmp/nix-build-open-build-dir.drv-0/build/syncPoint") + machine.wait_until_succeeds("stat /nix/var/nix/builds/nix-build-open-build-dir.drv-*/build/syncPoint") + dir = machine.succeed("ls -d /nix/var/nix/builds/nix-build-open-build-dir.drv-*").strip() # But Alice shouldn't be able to access the build directory - machine.fail("su alice -c 'ls /tmp/nix-build-open-build-dir.drv-0/build'") - machine.fail("su alice -c 'touch /tmp/nix-build-open-build-dir.drv-0/build/bar'") - machine.fail("su alice -c 'cat /tmp/nix-build-open-build-dir.drv-0/build/foo'") + machine.fail(f"su alice -c 'ls {dir}/build'") + machine.fail(f"su alice -c 'touch {dir}/build/bar'") + machine.fail(f"su alice -c 'cat {dir}/build/foo'") # Tell the user to finish the build - machine.succeed("echo foo > /tmp/nix-build-open-build-dir.drv-0/build/syncPoint") + machine.succeed(f"echo foo > {dir}/build/syncPoint") with subtest("Being able to execute stuff as the build user doesn't give access to the build dir"): machine.succeed(r""" @@ -124,16 +125,17 @@ in args = [ (builtins.storePath "${create-hello-world}") ]; }' >&2 & """.strip()) - machine.wait_for_file("/tmp/nix-build-innocent.drv-0/build/syncPoint") + machine.wait_until_succeeds("stat /nix/var/nix/builds/nix-build-innocent.drv-*/build/syncPoint") + dir = machine.succeed("ls -d /nix/var/nix/builds/nix-build-innocent.drv-*").strip() # The build ran as `nixbld1` (which is the only build user on the # machine), but a process running as `nixbld1` outside the sandbox # shouldn't be able to touch the build directory regardless - machine.fail("su nixbld1 --shell ${pkgs.busybox-sandbox-shell}/bin/sh -c 'ls /tmp/nix-build-innocent.drv-0/build'") - machine.fail("su nixbld1 --shell ${pkgs.busybox-sandbox-shell}/bin/sh -c 'echo pwned > /tmp/nix-build-innocent.drv-0/build/result'") + machine.fail(f"su nixbld1 --shell ${pkgs.busybox-sandbox-shell}/bin/sh -c 'ls {dir}/build'") + machine.fail(f"su nixbld1 --shell ${pkgs.busybox-sandbox-shell}/bin/sh -c 'echo pwned > {dir}/build/result'") # Finish the build - machine.succeed("echo foo > /tmp/nix-build-innocent.drv-0/build/syncPoint") + machine.succeed(f"echo foo > {dir}/build/syncPoint") # Check that the build was not affected machine.succeed(r"""