We now have functional tests for these. The unit tests added negligible
value while imposing a much higher maintenance cost.
The maintenance cost is high:
- No automatic accept option
- They broke 5+ times during this session due to implementation changes (trace count, ordering)
- They require understanding ANSI escape codes, `Uncolored()` wrappers, and trace reversal
- They test empty traces (`HintFmt("")` from `withTrace(pos, "")`) - pure implementation detail
- They're fragile: adding any trace anywhere breaks the exact count assertions (see the sketch below)
The additional value over functional tests is minimal:
- Functional tests already verify the error message
- Functional tests already show trace order and content (as users see it, helps review)
- Unit tests verify "exactly 3 traces, not 2 or 4" - but users don't count traces
- Unit tests verify empty traces exist - but users never see them
The white-box testing catches the wrong things:
- It catches "you added helpful context" as a failure
- It doesn't catch "the context is confusing" (which functional tests would show)
- It enforces implementation details that should be allowed to evolve
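For illustration, here is a minimal, self-contained sketch of the
assertion style being dropped. The types below are hypothetical
stand-ins, not Nix's real error classes; only the shape of the
assertions matters:

    #include <gtest/gtest.h>
    #include <string>
    #include <vector>

    // Hypothetical stand-ins for the real error/trace types.
    struct Trace { std::string hint; };
    struct ErrorInfo { std::vector<Trace> traces; };

    // Pins the exact trace count and the presence of an empty trace, so
    // adding one helpful trace anywhere in the evaluator fails the test
    // even though the message users actually see is unchanged or better.
    TEST(ErrorTraceStyle, BrittleExactCountAssertions) {
        ErrorInfo info = { {
            {"while evaluating the attribute 'x'"},
            {""},
            {"while calling a function"},
        } };
        ASSERT_EQ(info.traces.size(), 3u);   // "exactly 3 traces, not 2 or 4"
        ASSERT_EQ(info.traces[1].hint, "");  // an empty trace users never see
    }
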
Show which element(s) are involved at each error point:
- When an element is missing the "key" attribute, show the element
- When an element is not an attribute set, show the element
- When comparing keys fails, show both elements being compared
- When calling operator fails, show which element was being processed
This provides concrete context using `ValuePrinter` with `errorPrintOptions`.
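As a rough, non-compiling fragment (assuming the
`error<...>(...).atPos(...).debugThrow()` builder and that `ValuePrinter`
can be passed as a format argument; the message wording and call sites
are illustrative), the shape of such a call might look like:

    // Fragment only: when an element of the working set is not an
    // attribute set, include the offending value itself in the message,
    // printed with the error-oriented defaults (errorPrintOptions).
    if (e->type() != nAttrs)
        state.error<TypeError>(
            "element in genericClosure is %1% while a set was expected",
            ValuePrinter(state, *e, errorPrintOptions))
            .atPos(pos)
            .debugThrow();
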
Note: `errorPrintOptions` uses `maxDepth=10` by default, which may print
quite deeply nested structures in error messages. That could be
overwhelming, but it follows the existing default for error contexts.
* It is tough to contribute to a project that doesn't use a formatter.
* It is extra hard to contribute to a project that has configured a formatter but ignores it for some files.
* Code formatting makes it harder to hide obscure / weird bugs, whether by accident or on purpose.
Let's rip the bandaid off?
Note that PRs currently in flight should still merge relatively easily if `clang-format` is applied to their tip prior to merging.
For example, instead of doing

    #include "nix/store-config.hh"
    #include "nix/derived-path.hh"

now do

    #include "nix/store/config.hh"
    #include "nix/store/derived-path.hh"

This was originally planned in the issue, and was also recently
requested by Eelco.
Most of the change is purely mechanical. There is just one small
additional issue. See how, in the example above, we took this
opportunity to also turn `<comp>-config.hh` into `<comp>/config.hh`.
Well, there was already a `nix/util/config.{cc,hh}`. Even though there
is no public configuration header for libutil (which would also be
called `nix/util/config.{cc,hh}`), that's still confusing. To avoid any
such confusion, we renamed that to `nix/util/configuration.{cc,hh}`.
Finally, note that the libflake headers already did this, so we didn't
need to do anything to them. We wouldn't want to mistakenly get
`nix/flake/flake/flake.hh`!
Progress on #7876
The short answer for why we need to do this is so we can consistently do
`#include "nix/..."`. Without this change, there are ways to still make
that work, but they are hacky, and they have downsides such as making it
harder to make sure headers from the wrong Nix library (e.g.
`libnixexpr` headers in `libnixutil`) aren't being used.
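For illustration (the exact header names here are just examples), a
libnixstore source file can now spell out which library each header
comes from, so a stray `libnixexpr` include in the wrong place is easy
to spot:

    // Library-qualified includes; the subdirectory names the owning library.
    #include "nix/util/error.hh"      // libnixutil
    #include "nix/store/store-api.hh" // libnixstore
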
The C API already used the `nix_api_*` prefix, so its headers are
accordingly *not* put in subdirectories.
Progress on #7876
We resisted doing this for a while because it would be annoying not to
have the header/source file pairs close by, and not to be able to switch
between them just by tweaking the file path/name. But I am ameliorating
that with symlinks in the next commit.
This uses the single-threaded C-based routines from libblake3.
This is not optimal performance-wise, but should be a good starting point
for Nix compatibility with BLAKE3 hashing until a more performant
implementation based on the multi-threaded BLAKE3 routines
(written in Rust) can be developed.
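
For reference, a minimal sketch of the single-threaded libblake3 C API
being used (the wiring into Nix's own hashing machinery is not shown):

    #include <blake3.h>
    #include <cstdint>
    #include <cstdio>
    #include <string>

    int main() {
        // Incremental hashing with the portable, single-threaded C routines.
        blake3_hasher hasher;
        blake3_hasher_init(&hasher);

        std::string data = "hello world";
        blake3_hasher_update(&hasher, data.data(), data.size());

        // Default 32-byte output; other lengths are possible via the same call.
        uint8_t out[BLAKE3_OUT_LEN];
        blake3_hasher_finalize(&hasher, out, BLAKE3_OUT_LEN);

        for (auto b : out)
            std::printf("%02x", static_cast<unsigned>(b));
        std::printf("\n");
    }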