This systematizes the way our s3:// URLs are parsed in filetransfer.cc.
Yoinked and refactored out of [1].
[1]: https://github.com/NixOS/nix/pull/13752
Co-authored-by: Bernardo Meurer Costa <beme@anthropic.com>
Compilers in nixpkgs have caught up and major distros
should also ship recent enough compilers. It would be
nice to have newer features like more fully-featured
ranges and `deducing this`.
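For illustration only (not code from this change), `deducing this` collapses the usual const/non-const/lvalue/rvalue accessor overloads into a single member function:
```
#include <string>
#include <utility>

struct Setting {
    std::string value;

    // C++23 "deducing this": one accessor covers const, non-const,
    // lvalue and rvalue objects instead of four overloads.
    template<typename Self>
    auto && get(this Self && self) {
        return std::forward<Self>(self).value;
    }
};
```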
Turns out we didn't have tests for some of the important behavior introduced
for flake reference fragments and URL queries [1]. This is rather important
and is relied upon by existing tooling. This fixes up these exact cases before
handing off the URL to the Boost.URL parser.
To the best of my knowledge this implements the same behavior as the prior
regex-based parser did [2]:
> fragmentRegex = "(?:" + pcharRegex + "|[/? \"^])*";
> queryRegex = "(?:" + pcharRegex + "|[/? \"])*";
[1]: 9c0a09f09f
[2]: https://github.com/NixOS/nix/blob/2.30.2/src/libutil/include/nix/util/url-parts.hh
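In other words, characters the old regexes tolerated in queries and fragments (space, `"`, `^`, ...) get percent-encoded before the URL reaches Boost.URL. A rough sketch of the idea (the helper name is made up, not the actual code):
```
#include <string>
#include <string_view>

// Sketch: pre-encode characters the old regexes accepted in fragments
// but which a strict RFC 3986 parser rejects.
static std::string encodeFragmentExtras(std::string_view s)
{
    static constexpr char hex[] = "0123456789ABCDEF";
    std::string out;
    for (unsigned char c : s) {
        if (c == ' ' || c == '"' || c == '^') {
            out += '%';
            out += hex[c >> 4];
            out += hex[c & 0x0f];
        } else
            out += static_cast<char>(c);
    }
    return out;
}
```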
This is good for e.g. `std::string_view` and `StringMap`.
Needed by #11139
Co-authored-by: Sergei Zimmerman <145775305+xokdvium@users.noreply.github.com>
Clang refused to do a narrowing conversion in an initializer list:
```
local-keys.cc:56:90: note: insert an explicit cast to silence this issue
return name + ":" + base64::encode(std::as_bytes(std::span<const unsigned char>{sig, sigLen}));
^~~~~~
static_cast<size_type>( )
```
Fixes usage of `#` symbol in the reference name.
This also seems to identify several deficiencies in the libgit2 refname
validation code wrt the DEL symbol and a singular `@` symbol [1].
[1]: https://git-scm.com/docs/git-check-ref-format#_description
The implicit dependency on refLength (which is the StorePath::HashLen)
is not good. Also the companion tests and benchmarks are already in libstore-tests.
- No more private constructor, which was kinda weird
- Two new static functions, `baseFromSize` and `baseFromSize`, that do
one thing, and one thing only (simple).
- Two `Hash::parse*` methods that previously used the private constructor
  can now use these two functions directly.
- The remaining `Hash::parseAny*` methods, which are inherently more
complex, are written in terms of a `parseAnyHelper` static function
which is also complex, but keeps the complexity in one spot.
This patch allows users to specify the connection port
in store URLs like so:
```
nix store info --store "ssh-ng://localhost:22" --json
```
Previously this failed with: `error: failed to start SSH connection to 'localhost:22'`,
because the code did not distinguish the port from the hostname. This
patch remedies that problem by introducing a ParsedURL::Authority type
for working with parsed authority components of URIs.
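Sketch of the shape this takes (simplified; the real `ParsedURL::Authority` may carry more than this):
```
#include <cstdint>
#include <optional>
#include <string>

// Simplified picture of an authority component, split into its parts.
struct Authority {
    std::optional<std::string> user; // "alice" in ssh-ng://alice@host
    std::string host;                // "localhost", "[fe80::1]", ...
    std::optional<uint16_t> port;    // 22 in ssh-ng://localhost:22
};
```
With the port split out, the SSH machinery can pass it along explicitly (e.g. via ssh's `-p` flag) instead of mistaking "localhost:22" for a hostname.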
Now that the URL parsing code is less ad-hoc we can
add more long-awaited fixes for specifying SSH connection
ports in store URIs.
Builds upon the work from bd1d2d1041.
Co-authored-by: Sergei Zimmerman <sergei@zimmerman.foo>
Co-authored-by: John Ericson <John.Ericson@Obsidian.Systems>
Make it separate from Hash, since other things can be base-encoded too.
This isn't really needed for Nix, but it makes the code easier to read
e.g. for someone reimplementing this stuff in a different language. (Of
course, Base16/Base64 would normally come from an off-the-shelf library, but
now the hash code, which is more bespoke, is less cluttered with the parts
that would come from such a library.)
Many reimplementations of "Nix32" and our hash type already exist, so
this cleanup is coming years too late, but I say better late than never
/ it is always good to nudge the code in the direction of being a
"living spec".
Co-authored-by: Sergei Zimmerman <sergei@zimmerman.foo>
This keeps things fast by making the function inline, but also prevents
people from having to know about the `0xFF` implementation detail
directly, instead making one go through a `std::optional` (which could be
fused away with a sufficiently smart compiler).
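Sketched out (names and signatures are illustrative, not the actual header):
```
#include <array>
#include <optional>

// Declared/defined elsewhere; 0xFF marks "not a nix32 digit".
extern const std::array<unsigned char, 256> reverseNix32Map;

// Inline so the lookup stays cheap, while callers never see the 0xFF sentinel.
inline std::optional<unsigned char> decodeNix32Char(char c)
{
    unsigned char v = reverseNix32Map[static_cast<unsigned char>(c)];
    if (v == 0xFF) return std::nullopt;
    return v;
}
```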
Additionally, the base "nix32" implementation is moved to its own header
file pair, as it is logically distinct and prior to the `Hash` data
type. It would probably be nice to do this with all the hash format
implementations.
Some binary caches (incorrectly) use this header to indicate lack of
compression, inspired by the valid "identity" token in the
"Accept-Encoding" header.
The changes include:
* Defining nix32Chars as a constexpr char[].
* Adding a constexpr std::array<unsigned char, 256> (reverseNix32Map) to map characters to their base-32 digit values at compile time.
* Replacing the slow character search loop with a direct lookup using reverseNix32Map (roughly as sketched below).
* Removing std::once_flag/isBase32 logic in references.cc in favor of reverseNix32Map.
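A sketch of the table-building part (the alphabet is the usual nix32 one; the exact code in-tree may differ):
```
#include <array>

// The nix32 alphabet: 32 digits, omitting e, o, t and u.
constexpr char nix32Chars[] = "0123456789abcdfghijklmnpqrsvwxyz";

// Compile-time reverse map: character -> digit value, 0xFF for "not a digit".
constexpr std::array<unsigned char, 256> makeReverseNix32Map()
{
    std::array<unsigned char, 256> map{};
    for (auto & x : map) x = 0xFF;
    for (unsigned char i = 0; i < 32; ++i)
        map[static_cast<unsigned char>(nix32Chars[i])] = i;
    return map;
}

constexpr auto reverseNix32Map = makeReverseNix32Map();
```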
Signed-off-by: Alexander V. Nikolaev <avn@avnik.info>
GCC doesn't really benefit as much as Clang does from
using precompiled headers. Another aspect to consider is that
clangd doesn't really like GCC's PCH flags in the compilation database,
so GCC-based devshells will keep working with clangd.
This also has the slight advantage of ensuring that our includes are in
order, since we build with both Clang and GCC.
SHA-256 is Git's next hash algorithm. The world is still basically stuck
on SHA-1 with git, but shouldn't be. We can at least do our part to get
ready.
On the C++ implementation side, only a little bit of generalization was
needed, and that was fairly straightforward. The tests (unit and
system) were actually the bigger part, and care was taken to make sure
they all cover both algorithms equally.
Boost.URL is a significantly more RFC-compliant parser
than what libutil currently has: a bundle of incomprehensible
regexes.
One aspect of this change is that RFC4007 ZoneId IPv6 literals
are represented in URIs according to RFC6874 [1].
Previously they were represented naively like so: [fe80::818c:da4d:8975:415c%enp0s25].
This is not entirely correct, because the percent itself has to be pct-encoded:
> "%" is always treated as
an escape character in a URI, so, according to the established URI
syntax [RFC3986] any occurrences of literal "%" symbols in a URI MUST
be percent-encoded and represented in the form "%25". Thus, the
scoped address fe80::a%en1 would appear in a URI as
http://[fe80::a%25en1].
[1]: https://datatracker.ietf.org/doc/html/rfc6874
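So when rendering a host that carries a zone id, the separator itself has to come out as `%25`. Illustrative sketch only (the helper name is made up):
```
#include <string>
#include <string_view>

// Render an IPv6 address with an optional RFC 4007 zone id as a URI host
// per RFC 6874: the "%" separator itself must appear as "%25".
std::string renderIPv6Host(std::string_view addr, std::string_view zoneId)
{
    std::string host = "[";
    host += addr;
    if (!zoneId.empty()) {
        host += "%25";  // pct-encoded "%"
        host += zoneId;
    }
    host += "]";
    return host;
}

// renderIPv6Host("fe80::a", "en1") == "[fe80::a%25en1]"
```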
Co-authored-by: Jörg Thalheim <joerg@thalheim.io>
The myriad of hand-rolled URL parsing and validation code
is a constant source of problems. Regexes are not a great way
of writing parsers and there's a history of getting them wrong.
Boost.URL is a good library we can outsource most of the heavy
lifting to.
* It is tough to contribute to a project that doesn't use a formatter.
* It is extra hard to contribute to a project which has configured the formatter, but ignores it for some files.
* Code formatting makes it harder to hide obscure / weird bugs, by accident or on purpose.
Let's rip the bandaid off?
Note that PRs currently in flight should be able to be merged relatively easily by applying `clang-format` to their tip prior to merge.
Previous use of symlink_status() always translated into a stat call, leading
to huge performance penalties for by-name-overlay in nixpkgs. The comment
below references possible caching, but that turned out to be erroneous, since
the correct way to make use of the caching API is to call the `is_*` member
functions [1]. For example, here's how libstdc++ does that [2], [3].
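Roughly, the difference looks like this (illustrative sketch, not the actual call site):
```
#include <filesystem>

namespace fs = std::filesystem;

void scan(const fs::path & dir)
{
    for (const auto & entry : fs::directory_iterator(dir)) {
        // Before: symlink_status() always hits stat()/lstat(), even though the
        // directory iterator may already know the file type.
        // auto type = entry.symlink_status().type();

        // After: is_symlink()/is_directory() can use the type cached by the
        // iterator (e.g. from readdir's d_type) and skip the extra syscall.
        if (entry.is_symlink()) { /* ... */ }
        else if (entry.is_directory()) { /* ... */ }
    }
}
```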
This translates to great nixpkgs eval performance improvements:
```
Benchmark 1: GC_INITIAL_HEAP_SIZE=4G result/bin/nix-instantiate ../nixpkgs -A hello --readonly-mode
Time (mean ± σ): 186.7 ms ± 6.7 ms [User: 121.3 ms, System: 64.9 ms]
Range (min … max): 179.4 ms … 201.6 ms 16 runs
Benchmark 2: GC_INITIAL_HEAP_SIZE=4G nix-instantiate ../nixpkgs -A hello --readonly-mode
Time (mean ± σ): 230.6 ms ± 5.0 ms [User: 126.9 ms, System: 103.1 ms]
Range (min … max): 225.1 ms … 241.4 ms 13 runs
```
[1]: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2016/p0317r1.html
[2]: 8ea555b7b4/libstdc%2B%2B-v3/include/bits/fs_dir.h (L341-L348)
[3]: 8ea555b7b4/libstdc%2B%2B-v3/include/bits/fs_dir.h (L161-L163)