Progress on #13570.
If we depend on the store dir, our JSON serializers/deserializers take
extra arguments, and that interferes with the likes of various
frameworks for associating these with types (e.g. nlohmann in C++, Serde
in Rust, and Aeson in Haskell).
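A minimal sketch of why this matters, assuming nlohmann's documented
`to_json`/`from_json` convention; the `StorePath` stand-in and the
`toJsonWithStoreDir` helper are hypothetical, for illustration only:

```
#include <nlohmann/json.hpp>
#include <string>

struct StorePath { std::string baseName; };

// The nlohmann convention: fixed-signature hooks found via ADL.
// With these in place, `nlohmann::json j = path;` and
// `j.get<StorePath>()` just work.
void to_json(nlohmann::json & j, const StorePath & p) {
    j = p.baseName;
}
void from_json(const nlohmann::json & j, StorePath & p) {
    p.baseName = j.get<std::string>();
}

// A serializer that also needs the store dir cannot be registered
// with the framework; it has to be a plain function that every
// caller must remember to use. (Hypothetical signature.)
nlohmann::json toJsonWithStoreDir(const std::string & storeDir, const StorePath & p) {
    return storeDir + "/" + p.baseName;
}
```

Serde and Aeson have the same shape of problem: their trait/typeclass
instances take no extra context argument.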
Fix #11897
As described in the issue, this makes for a simpler and much more
intuitive notion of a realisation key. This is better for pedagogy and
for interoperability across more tools.
As the issue was written, we would switch to having only shallow
realisations first, and then do this. But going to only shallow
realisations is a more complex change, and it turns out we weren't even
testing for the benefits that derivation hashes (modulo FODs) provided
in the deep realisation case, so I now just want to do this first.
Doing this gets the binary cache data structures in order, which will
unblock the Hydra fixed-output-derivation tracking work. I don't want to
delay that work while I figure out the changes needed to support
only shallow realisations.
This reverts commit bab1cda0e6.
The s3:ListBucket permission is required for read operations on S3
binary caches, not just for writes. Without this permission, users get
"Access Denied" errors when running nix-build.
Extract the path-based compression-method determination logic into a
protected method that returns `std::optional<std::string>`. This allows
subclasses to reuse the logic and makes the semantics clearer (`nullopt`
means no compression, not an empty string).
This prepares for S3BinaryCacheStore to apply the same compression
rules when implementing multipart uploads.
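A sketch of the shape this takes; the class slice, option names, and
path rules below are assumptions for illustration, not the exact code:

```
#include <optional>
#include <string>

// Hypothetical slice of a binary cache store class.
class BinaryCacheStore {
protected:
    std::string narinfoCompression = "none"; // e.g. "xz", "br", "none"
    std::string logCompression = "none";

    // Returns the compression method for a given object path, or
    // std::nullopt when the object should not be compressed at all.
    // std::nullopt is deliberate: "no compression" is a distinct
    // state, not an empty-string method name.
    std::optional<std::string> getCompressionMethod(const std::string & path)
    {
        if (path.ends_with(".narinfo") && narinfoCompression != "none")
            return narinfoCompression;
        if (path.starts_with("log/") && logCompression != "none")
            return logCompression;
        return std::nullopt;
    }
};
```

With the helper protected, an S3 subclass can apply the same rules when
choosing the `Content-Encoding` for each part of a multipart upload.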
At least one user has probably used `file+git://` when they mean `git+file://`, maybe thinking of it as "a file-based git repository". This adds a specific error message to hint at the correct URL scheme format and may save some users from resorting to `path:///` and copying an entire repo.
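For illustration only, a hypothetical version of such a check; the
actual message and call site in the tree may differ:

```
#include <stdexcept>
#include <string>

// Hypothetical guard at input-URL parse time. Nix input schemes are
// written "<type>+<transport>://...", so "git+file://" is correct
// and "file+git://" is not.
void rejectSwappedScheme(const std::string & scheme)
{
    if (scheme == "file+git")
        throw std::invalid_argument(
            "input scheme 'file+git' is not valid; "
            "did you mean 'git+file'?");
}
```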
Adds a comprehensive test to verify that `nix-prefetch-url` correctly
handles S3 URLs with query parameters (e.g., custom endpoints and regions).
Previously, nix-prefetch-url would fail with "invalid store
path" errors when given S3 URLs with query parameters like
`?endpoint=http://server:9000&region=eu-west-1`, because it incorrectly
extracted the filename from the query parameters instead of the path.
Previously, `prefetchFile()` used `baseNameOf()` directly on the URL string
to extract the filename. This caused issues with URLs containing query
parameters that include slashes, such as S3 URLs with custom endpoints:
```
s3://bucket/file.txt?endpoint=http://server:9000
```
The `baseNameOf()` function naively searches for the rightmost `/` in the
entire string, which would find the `/` in `http://server:9000` and extract
`server:9000&region=...` as the filename. This resulted in invalid store
path names containing illegal characters like `:`.
This commit fixes the issue by:
1. Adding a `VerbatimURL::lastPathSegment()` method that extracts the last
non-empty path segment from a URL, using `pathSegments(true)` to filter
empty segments
2. Changing `prefetchFile()` to accept `const VerbatimURL &` and use the new
`lastPathSegment()` method instead of manual path parsing
3. Adding early validation with `checkName()` to fail quickly on invalid
filenames
4. Maintaining backward compatibility by falling back to `baseNameOf()` for
unparsable `VerbatimURL`s
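A sketch of what steps 1 and 2 might look like, assuming a
`VerbatimURL` wrapper holding an optional parse result and a
`pathSegments(bool skipEmpty)` view on the parsed path; the names and
signatures here are illustrative, not the exact ones in the tree:

```
#include <optional>
#include <string>
#include <vector>

// Illustrative stand-ins for the real types.
struct ParsedURL {
    std::vector<std::string> path; // decoded path segments

    // Path segments, optionally skipping empty ones ("//" artifacts).
    std::vector<std::string> pathSegments(bool skipEmpty) const {
        if (!skipEmpty) return path;
        std::vector<std::string> res;
        for (auto & s : path)
            if (!s.empty()) res.push_back(s);
        return res;
    }
};

struct VerbatimURL {
    std::string raw;
    std::optional<ParsedURL> parsed; // nullopt if the URL didn't parse

    // Last non-empty path segment, e.g. "file.txt" for
    // "s3://bucket/file.txt?endpoint=http://server:9000".
    std::optional<std::string> lastPathSegment() const {
        if (!parsed) return std::nullopt;
        auto segs = parsed->pathSegments(true);
        if (segs.empty()) return std::nullopt;
        return segs.back();
    }
};
```

`prefetchFile()` would then prefer `lastPathSegment()`, falling back to
`baseNameOf()` on the raw string only when the URL cannot be parsed,
with `checkName()` rejecting names containing characters like `:`
before any store path is constructed.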