Show which element(s) are involved at each error point:
- When an element is missing the "key" attribute, show the element
- When an element is not an attribute set, show the element
- When comparing keys fails, show both elements being compared
- When calling operator fails, show which element was being processed
This provides concrete context using ValuePrinter with errorPrintOptions.
Note: errorPrintOptions uses maxDepth=10 by default, which may print
quite deeply nested structures in error messages. This could potentially
be overwhelming, but follows the existing default for error contexts.
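For illustration, a minimal sketch (assuming the error points above belong to builtins.genericClosure, which the "key"/operator wording suggests): the element produced by operator lacks the "key" attribute, and the improved message can now print that element as context.
```nix
builtins.genericClosure {
  startSet = [ { key = 1; } ];
  # the produced element has no "key" attribute, hitting the
  # "missing key attribute" error point described above
  operator = e: [ { value = e.key + 1; } ];
}
```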
Best I can tell this was never supposed to be exposed to the user
and has been this way since 2.19.
2.18 did not expose this file to the user:
```
$ nix run nix/2.18-maintenance -- eval --expr "import <nix/derivation-internal.nix>"
error: getting status of '/__corepkgs__/derivation-internal.nix': No such file or directory
```
fetchToStore() caching was broken because it used the fingerprint of
the accessor, but now that the accessor (typically storeFS) is a
composite (like MountedSourceAccessor or AllowListSourceAccessor),
there is no fingerprint anymore. So fetchToStore() now uses the new
getFingerprint() method to get the fingerprint specific to the
subpath.
This adds regression tests for fromTOML overflow/underflow behavior.
Previous versions of toml11 used to saturate, but this was never an
intended behavior (and Snix/Nix 2.3/toml11 >= 4.0 validate this).
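As a rough sketch of what the overflow case looks like (exact diagnostics not reproduced here), a TOML integer one past the signed 64-bit maximum should now be rejected rather than saturated:
```nix
# 9223372036854775808 = 2^63, one past the largest Nix integer
builtins.fromTOML "x = 9223372036854775808"
```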
(cherry picked from Lix [1,2])
[1]: 7ee442079d
[2]: 4de09b6b54
The current test suite doesn't cover subsecond formatting at all, and
toml11 is quite finicky about it. We should at the very least test its
behavior to avoid silent breakage on updates.
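A sketch of the kind of input such a test would pin down; evaluating it is assumed to require TOML timestamp support (the parse-toml-timestamps experimental feature), and the exact rendering of the fractional seconds is precisely what must stay stable across toml11 updates:
```nix
# a datetime with a subsecond component
builtins.fromTOML ''
  ts = 1979-05-27T07:32:00.999999-07:00
''
```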
c39cc00404 added assertions for
all Value accesses, and the following case started failing with
an `unreachable`:
(/tmp/fun.nix):
```nix
{a}: a
```
```
$ nix eval --impure --expr 'import /tmp/fun.nix {a="a";b="b";}'
```
This would crash:
```
terminating due to unexpected unrecoverable internal error: Unexpected condition in getStorage at ../include/nix/expr/value.hh:844
```
This is not a regression, but rather surfaces an existing problem which previously
went undiagnosed. In the case of an import, `fun` is the `import` primOp, so that read is invalid,
and previously this resulted in an access to an inactive union member, which is UB.
The correct value to use is `vCur`. The identical problem also affected the case of a missing argument.
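For reference, a sketch of the missing-argument variant with the same /tmp/fun.nix; before the fix it went down the same invalid read path while constructing the error message:
```nix
import /tmp/fun.nix { }   # called without the required argument "a"
```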
Add previously failing test cases to the functional/lang test suite.
Fixes #13448.
libstdc++'s std::stable_sort and the new builtins.sort implementation
special-case ranges with length less than or equal to 16 and delegate
to insertion sort.
Having a larger e2e test would allow catching sort stability issues
at the functional level as well.
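A minimal sketch of such a check: more than 16 elements with duplicate keys, so sorting is not handled entirely by the insertion-sort fast path, and a stable sort must keep elements with equal keys in their original relative order.
```nix
let
  # 32 elements, 4 sharing each key (integer division)
  xs = builtins.genList (i: { key = i / 4; idx = i; }) 32;
  sorted = builtins.sort (a: b: a.key < b.key) xs;
in
# for a stable sort, idx stays ascending within each key group
map (x: x.idx) sorted
```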
These tests were collected from builtins.match usage in nixpkgs
f870c6ccc8951fc48aeb293cf3e98ade6ac42668, evaluated for the x86_64-linux
system. At most two matching and two non-matching cases are included for
each encountered regex. This should hopefully add more confidence when
trying to switch the regex implementation in the future.
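A hypothetical illustration of the shape of these cases (the regexes here are made up, not taken from nixpkgs): for each regex, an input that matches alongside one that doesn't, relying on builtins.match returning the list of captures on a full match and null otherwise.
```nix
[
  (builtins.match "[0-9]+\\.[0-9]+" "1.2")        # matches: [ ]
  (builtins.match "[0-9]+\\.[0-9]+" "1.2.3")      # no full match: null
  (builtins.match "([a-z]+)-([0-9]+)" "foo-42")   # matches: [ "foo" "42" ]
  (builtins.match "([a-z]+)-([0-9]+)" "FOO-42")   # no match: null
]
```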
As a prelude to making "or" work like a normal variable, emit a warning
any time the "fn or" production is used in a context that will change
how it is parsed when that production is refactored.
In detail: in the future, OR_KW will be moved to expr_simple, and the
cursed ExprCall production that is currently part of the expr_select
nonterminal will be generated "normally" in expr_app instead. Any
productions that accept an expr_select will be affected, except for the
expr_app nonterminal itself (because, while expr_app has a production
accepting a bare expr_select, its other production will continue to
accept "fn or" expressions). So all we need to do is emit an appropriate
warning when an expr_simple representing a cursed ExprCall is accepted
in one of those productions without first going through expr_app.
As the warning message describes, users can suppress the warning by
wrapping their problematic "fn or" expressions in parentheses. For
example, "f g or" can be made future-proof by rewriting it as
"f (g or)"; similarly "[ x y or ]" can be rewritten as "[ x (y or) ]",
etc. The parentheses preserve the current grouping behavior, as in the
future "f g or" will be parsed as "(f g) or", just like
"f g anything-else" is grouped. (Mechanically, this suppresses the
warning because the problem ExprCalls go through the
"expr_app : expr_select" production, which resets the cursed status on
the ExprCall.)
This also bans various ways of sneaking negative numbers from the
language into unsuspecting builtins, as was exposed while auditing the
consequences of changing the Nix language integer type to a newtype.
It's unlikely that this change comprehensively ensures correctness when
passing integers out of the Nix language, and we should probably add a
checked-narrowing function or something similar, but that's out of scope
for the immediate change.
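As a minimal illustration (exact diagnostics not reproduced here), arithmetic that leaves the signed 64-bit range is now an evaluation error instead of silently wrapping:
```nix
# 9223372036854775807 is the largest Nix integer; adding 1 must not wrap
9223372036854775807 + 1
```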
During the development of this I found a few fun facts about the
language:
- You could overflow integers by converting from unsigned JSON values (see the sketch after this list).
- You could overflow unsigned integers by converting negative numbers
into them when going into Nix config, into fetchTree, and into flake
inputs.
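A sketch of the JSON case: the value fits in an unsigned 64-bit integer but not in Nix's signed 64-bit integers, so the conversion is now rejected rather than wrapped:
```nix
builtins.fromJSON "18446744073709551615"
```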
The flake inputs and Nix config cannot actually be tested properly
since they both ban thunks; however, we put in checks anyway because
it's possible these could somehow be used to do such shenanigans some
other way.
Note that Lix has banned Nix language integer overflows since the very
first public beta, but threw a SIGILL about them because we run with
-fsanitize=signed-integer-overflow -fsanitize-undefined-trap-on-error in
production builds. Since the Nix language uses signed integers, overflow
was simply undefined behaviour, and since we defined that to trap, it
did.
Trapping on it was bad UX, but we didn't even entirely notice
that we had done this at all until it was reported as a bug a couple of
months later (which is, to be fair, that flag working as intended). It
has had enough production time that, aside from code that is IMHO buggy
(and which is, in any case, not in nixpkgs) such as
https://git.lix.systems/lix-project/lix/issues/445, we don't think
anyone doing anything reasonable actually depends on wrapping overflow.
Even for weird use cases such as doing funny bit crimes, it doesn't make
sense IMO to have wrapping behaviour, since two's complement arithmetic
overflow behaviour is so *aggressively* not what you want for *any* kind
of mathematics/algorithms. The Nix language exists for package
management, a domain where bit crimes are already only dubiously in
scope to begin with, and it makes a lot more sense for that domain for
the integers to never lose precision, either by throwing errors if they
would, or by being arbitrary-precision.
Fixes: https://github.com/NixOS/nix/issues/10968
Original-CL: https://gerrit.lix.systems/c/lix/+/1596
Change-Id: I51f253840c4af2ea5422b8a420aa5fafbf8fae75
The code that counts the number of elided attrs incorrectly used the
per-printer "global" attribute counter instead of a counter relevant
only to the current attribute set.
This bug flew under the radar because often the attribute sets aren't
nested or big enough, or we simply don't pay attention to the numbers.
I noticed the issue because the difference underflowed.
Although this behavior is tested by the functional test
lang/eval-fail-bad-string-interpolation-4.nix, the underflow slipped
through review. A simpler reproducer is as follows, but I haven't added
it to the test suite to keep the suite simple and marginally faster.
```
$ nix run nix/2.23.1 -- eval --expr '"" + (let v = { a = { a = 1; b = 2; c = 1; d = 1; e = 1; f = 1; g = 1; h = 1; }; b = { a = 1; b = 1; c = 1; }; }; in builtins.deepSeq v v)'
error:
… while evaluating a path segment
at «string»:1:6:
1| "" + (let v = { a = { a = 1; b = 2; c = 1; d = 1; e = 1; f = 1; g = 1; h = 1; }; b = { a = 1; b = 1; c = 1; }; }; in builtins.deepSeq v v)
| ^
error: cannot coerce a set to a string: { a = { a = 1; b = 2; c = 1; d = 1; e = 1; f = 1; g = 1; h = 1; }; b = { a = 1; «4294967289 attributes elided» }; }
```
this needs a string comparison because there seems to be no other way to
get that information out of bison. usually the location info is going to
be correct (pointing at a bad token), but since EOF isn't a token as
such, it'll be wrong in this case.
this hasn't shown up much so far because a single line ending *is* a
token, so any file formatted in the usual manner (ie, ending in a line
ending) would have its EOF position reported correctly.
the parser treats a plain \r as a newline, but error reports do not. this
can lead to interesting divergences if anything makes use of this
feature, with error reports pointing at wrong locations in the input (or
even outside the input altogether).