addToStore(): Don't parse the NAR
* StringSource: Implement skip()
This is slightly faster than doing a read() into a buffer just to
discard the data (sketched below).
* LocalStore::addToStore(): Skip unnecessary NARs rather than parsing them
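For illustration, a minimal sketch of what such a skip() can look like, assuming StringSource keeps a backing string view `s` and a cursor `pos` (names and signature assumed, not taken from the patch):

```cpp
// Sketch: discard `len` bytes by advancing the cursor, instead of
// read()ing them into a scratch buffer just to throw them away.
void StringSource::skip(size_t len)
{
    if (len > s.size() - pos)
        throw EndOfFile("end of string reached"); // assumed to mirror read()
    pos += len;
}
```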
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
A few changes had cropped up with `_NIX_TEST_ACCEPT=1`:
1. The BLAKE hashing test JSON had different indentation
2. Store URI had improper non-quoted spaces
(1) was just fixed, as we trust nlohmann JSON to parse JSON
correctly, regardless of whitespace.
For (2), the existing URL was made a read-only test, since we very much
wish to continue parsing such invalid URLs. The original read/write test
was then updated to properly percent-encode the space, since that is the
normal form.
Since 2.32, Nix needs Boost 1.87 or later to build, due to its use of
`boost::unordered::concurrent_flat_map`'s `try_emplace_and_cvisit`:
```
../src/libexpr/eval.cc: In member function ‘void nix::EvalState::evalFile(const nix::SourcePath&, nix::Value&, bool)’:
../src/libexpr/eval.cc:1096:20: error: ‘class boost::unordered::concurrent_flat_map<nix::SourcePath, nix::Value*, std::hash<nix::SourcePath>, std::equal_to<nix::SourcePath>, traceable_allocator<std::pair<const nix::SourcePath, nix::Value*> > >’ has no member named ‘try_emplace_and_cvisit’; did you mean ‘try_emplace_or_cvisit’?
 1096 |     fileEvalCache->try_emplace_and_cvisit(
      |                    ^~~~~~~~~~~~~~~~~~~~~~
      |                    try_emplace_or_cvisit
```
See 834580b539
The s3:ListBucket permission is required for read operations on S3
binary caches, not just for writes. Without this permission, users get
"Access Denied" errors when running nix-build.
Extract the path-based compression method determination logic into a
protected method that returns std::optional<std::string>. This allows
subclasses to reuse the logic and makes the semantics clearer (nullopt
means no compression, not empty string).
This prepares for S3BinaryCacheStore to apply the same compression
rules when implementing multipart uploads.
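A sketch of the resulting helper (setting names taken from the existing per-file-type compression options; exact signature assumed):

```cpp
// Sketch: one place that maps an upload path to its compression method.
// std::nullopt means "no compression"; an empty string is never returned.
std::optional<std::string> BinaryCacheStore::getCompressionMethod(const std::string & path)
{
    if (hasSuffix(path, ".narinfo") && narinfoCompression != "")
        return narinfoCompression;
    if (hasSuffix(path, ".ls") && lsCompression != "")
        return lsCompression;
    if (hasPrefix(path, "log/") && logCompression != "")
        return logCompression;
    return std::nullopt; // store this file uncompressed
}
```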
Fix POST requests with data to use the correct curl option for specifying
the body size. Previously, CURLOPT_INFILESIZE_LARGE was used for both POST
and PUT, but POST requires CURLOPT_POSTFIELDSIZE_LARGE.
This caused POST request bodies to not be sent correctly, manifesting as
S3 multipart CompleteMultipartUpload requests failing with "You must
specify at least one part" even though the XML body contained valid parts.
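The gist of the fix, sketched (the request structure is a stand-in; the curl options are real):

```cpp
// Sketch: curl takes the body size through different options
// depending on the verb.
auto size = (curl_off_t) request.data->size();
if (request.method == "POST")
    // POST bodies are sized via CURLOPT_POSTFIELDSIZE_LARGE...
    curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE_LARGE, size);
else
    // ...while PUT uploads use CURLOPT_INFILESIZE_LARGE.
    curl_easy_setopt(curl, CURLOPT_INFILESIZE_LARGE, size);
```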
When Nix's SQLite narinfo cache indicates a NAR exists, but the NAR
has been garbage collected from the binary cache, Nix displays error
messages even though the operation succeeds via fallback. This is
misleading because the cached narinfo is simply outdated.
This changes SubstituteGone exceptions to produce warnings instead of
errors, accurately reflecting that this is an expected cache coherency
issue, not an actual failure.
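Conceptually the change is just the severity at the catch site; a sketch (surrounding code simplified, not the literal diff):

```cpp
try {
    /* ... substitute the path from the binary cache ... */
} catch (SubstituteGone & e) {
    // The narinfo cache said the NAR exists, but the binary cache has
    // since garbage-collected it. We recover via fallback, so report
    // a warning rather than an error.
    warn("%s", e.what());
}
```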
Fixes #11411

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
At least one user has probably used `file+git://` when they mean `git+file://`, maybe thinking of it as "a file-based git repository". This adds a specific error message to hint at the correct URL scheme format and may save some users from resorting to `path:///` and copying an entire repo.
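A sketch of such a hint (error wording and call site assumed):

```cpp
// Sketch: catch the common transposition early and point at the fix.
if (url.scheme == "file+git")
    throw Error(
        "invalid URL scheme 'file+git'; did you mean 'git+file'? "
        "(e.g. 'git+file:///path/to/repo')");
```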
Adds a comprehensive test to verify that `nix-prefetch-url` correctly
handles S3 URLs with query parameters (e.g., custom endpoints and regions).
Previously, nix-prefetch-url would fail with "invalid store
path" errors when given S3 URLs with query parameters like
`?endpoint=http://server:9000&region=eu-west-1`, because it incorrectly
extracted the filename from the query parameters instead of the path.
Previously, `prefetchFile()` used `baseNameOf()` directly on the URL string
to extract the filename. This caused issues with URLs containing query
parameters that include slashes, such as S3 URLs with custom endpoints:
```
s3://bucket/file.txt?endpoint=http://server:9000
```
The `baseNameOf()` function naively searches for the rightmost `/` in the
entire string, which would find the `/` in `http://server:9000` and extract
`server:9000&region=...` as the filename. This resulted in invalid store
path names containing illegal characters like `:`.
This commit fixes the issue by:
1. Adding a `VerbatimURL::lastPathSegment()` method that extracts the last
non-empty path segment from a URL, using `pathSegments(true)` to filter
empty segments
2. Changing `prefetchFile()` to accept `const VerbatimURL &` and use the new
`lastPathSegment()` method instead of manual path parsing
3. Adding early validation with `checkName()` to fail quickly on invalid
filenames
4. Maintaining backward compatibility by falling back to `baseNameOf()` for
unparsable `VerbatimURL`s
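Putting it together, roughly (exact types, helper names, and error handling assumed):

```cpp
// Sketch: take the last non-empty path segment of the parsed URL,
// falling back to baseNameOf() when the URL cannot be parsed.
std::string VerbatimURL::lastPathSegment() const
{
    try {
        auto segments = parsed().pathSegments(/*skipEmpty=*/true);
        if (!segments.empty())
            return segments.back();
    } catch (BadURL &) {
        // unparsable URL: keep the old behavior
    }
    return baseNameOf(to_string());
}
```

With this, `s3://bucket/file.txt?endpoint=http://server:9000` yields `file.txt`, since only the URL's path component is consulted.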
The old code committed reentrancy crimes: it tried to erase an entry from
inside the emplace callback of the same map. This fails miserably with an
assertion in Boost:

```
terminating due to unexpected unrecoverable internal error: Assertion '(!find(px))&&("reentrancy not allowed")' failed in boost::unordered::detail::foa::entry_trace::entry_trace(const void *) at include/boost/unordered/detail/foa/reentrancy_check.hpp:33
```
This is trivially reproduced by using any S3 URL with a non-empty profile:
```
nix-prefetch-url "s3://happy/crash?profile=default"
```
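The broken pattern and its fix, sketched against `boost::unordered::concurrent_flat_map` (the credentials cache, `Credentials`, and `isExpired` are stand-ins; shown with `visit()`, but the same rule applies to the emplace-callback variants):

```cpp
#include <boost/unordered/concurrent_flat_map.hpp>
#include <string>

struct Credentials { /* ... */ };    // stand-in
bool isExpired(const Credentials &); // stand-in

boost::unordered::concurrent_flat_map<std::string, Credentials> cache;

void evict(const std::string & profile)
{
    // Broken: erasing from inside the visitation callback re-enters the
    // map, which Boost's internal checks reject with the assertion above.
    //
    //   cache.visit(profile, [&](auto & kv) {
    //       if (isExpired(kv.second))
    //           cache.erase(profile); // reentrancy!
    //   });

    // Fixed: decide inside the callback, mutate the map after it returns.
    bool expired = false;
    cache.visit(profile, [&](auto & kv) { expired = isExpired(kv.second); });
    if (expired)
        cache.erase(profile);
}
```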