cURL is not very strict about validating the URLs passed to it. We
should reflect this in our handling of URLs that we get from the user
in <nix/fetchurl.nix> or builtins.fetchurl. ValidURL was an attempt to
rectify this, but it turned out to be too strict. The only good way to
resolve this is to pass (in some cases) the user-provided string
verbatim to cURL. Other usages in libfetchers still benefit from the
structured ParsedURL and its validation, though. For example:
```shell-session
$ nix store prefetch-file --name foo 'https://cdn.skypack.dev/big.js@^5.2.2'
error: 'https://cdn.skypack.dev/big.js@^5.2.2' is not a valid URL: leftover
```
The URL should not be normalized before handing it off to cURL,
because builtin fetchers like fetchTarball/fetchurl are expected to
work with arbitrary URLs that might not be RFC 3986 compliant. For
those cases Nix should not normalize URLs, though validation is fine.
parseURL and cURL are supposed to accept the same set of URLs, since
they implement the same RFC.
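As a concrete illustration, reusing the URL from the error above: an
expression like the following should keep working, even though `^` is
not a valid RFC 3986 character and a normalizing parser might mangle
or reject it.

```nix
# Illustrative only: the '^' must reach cURL verbatim; Nix may check
# that the string looks like a URL, but must not rewrite it.
builtins.fetchurl "https://cdn.skypack.dev/big.js@^5.2.2"
```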
I think it is bad, for the following reasons, when `tests/` contains
a mix of functional and integration tests:
- The concepts are harder to understand. The documentation makes a
  good unit vs. functional vs. integration distinction, but when the
  integration tests are just two subdirectories within `tests/`, this
  distinction is not clear.
- Source filtering in the `flake.nix` is more complex. We need to
  filter out some of the directories from `tests/`, rather than simply
  picking the directories we want and taking all of them. This is a
  good sign that the structure of what we are trying to do does not
  match the structure of the files (see the sketch after this list).
With this change we have a clean layout:
```shell-session
$ git show 'HEAD:tests'
tree HEAD:tests
functional/
installer/
nixos/
```