In particular:
- Remove `get`; it is redundant with `valueAt` and the `get` in
  `util.hh`.
- Remove `nullableValueAt`. It is morally just the function composition
`getNullable . valueAt`, not an orthogonal combinator like the others.
- Make `optionalValueAt` return a pointer, not a `std::optional`. A
  pointer still expresses optionality, but without creating a needless
  copy, and it brings the combinator in line with the others, which
  also return references (see the sketch after this list).
- Delete the overloads of `valueAt` and `optionalValueAt` that take the
  map by value, as we did for `get` in 408c09a120; this prevents bugs
  and unnecessary copies.
`adl_serializer<DerivationOptions::OutputChecks>::from_json` was the one
use of `nullableValueAt`. After switching it to `getNullable . valueAt`,
I gave it a little static function for the `std::optional` it ultimately
does need to create. That could go in `json-utils.hh` eventually, but I
didn't bother for now since only one thing needs it.
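For illustration, that helper could look something like this (names assumed; this presumes `valueAt` returns a reference and `getNullable` maps a JSON null to a null pointer):

```c++
// Hypothetical static helper inside the adl_serializer: compose
// getNullable . valueAt, then materialize the std::optional that
// from_json ultimately needs.
static std::optional<DerivationOptions::OutputChecks> getNullableChecks(
    const nlohmann::json::object_t & map, const std::string & key)
{
    if (auto * value = getNullable(valueAt(map, key)))
        return value->get<DerivationOptions::OutputChecks>();
    return std::nullopt;
}
```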
Co-authored-by: Sergei Zimmerman <sergei@zimmerman.foo>
S3 buckets support object versioning to prevent unexpected changes,
but Nix previously lacked the ability to fetch specific versions of
S3 objects. This adds support for a `versionId` query parameter in S3
URLs, enabling users to pin to specific object versions:
```
s3://bucket/key?region=us-east-1&versionId=abc123
```
Move the HttpBinaryCacheStore class from its .cc file to a header to
enable inheritance by S3BinaryCacheStore. Create an S3BinaryCacheStore
class that overrides upsertFile() to implement the multipart upload
logic.
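A rough sketch of the resulting arrangement (signatures approximated, including the `sizeHint` parameter discussed just below; the real declarations live in Nix's headers):

```c++
#include <cstdint>
#include <iostream>
#include <memory>
#include <string>

// Previously a detail of the .cc file; now visible in a header so it
// can serve as a base class.
class HttpBinaryCacheStore
{
protected:
    virtual void upsertFile(const std::string & path,
        std::shared_ptr<std::basic_iostream<char>> istream,
        const std::string & mimeType, uint64_t sizeHint);
};

// Reuses all the HTTP plumbing and overrides only the upload entry
// point to add the multipart logic.
class S3BinaryCacheStore : public HttpBinaryCacheStore
{
protected:
    void upsertFile(const std::string & path,
        std::shared_ptr<std::basic_iostream<char>> istream,
        const std::string & mimeType, uint64_t sizeHint) override;
};
```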
Add a sizeHint parameter to BinaryCacheStore::upsertFile() to enable
size-based upload decisions in implementations. This lays the groundwork
for reintroducing S3 multipart upload support.
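A sketch of how the override can use that hint (the threshold and the multipart helper are hypothetical):

```c++
void S3BinaryCacheStore::upsertFile(const std::string & path,
    std::shared_ptr<std::basic_iostream<char>> istream,
    const std::string & mimeType, uint64_t sizeHint)
{
    // Hypothetical cutoff: large blobs take the multipart route,
    // everything else goes through the plain HTTP upload.
    if (sizeHint >= multipartThreshold)
        uploadMultipart(path, istream, mimeType);
    else
        HttpBinaryCacheStore::upsertFile(path, istream, mimeType, sizeHint);
}
```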
Add support for HTTP DELETE requests to the FileTransfer
infrastructure. This enables S3 multipart upload abort functionality
via DELETE requests to S3 endpoints.
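With libcurl this is typically a matter of setting a custom request method; a minimal sketch (error handling elided, not Nix's actual FileTransfer code):

```c++
#include <curl/curl.h>
#include <string>

void httpDelete(const std::string & url)
{
    CURL * curl = curl_easy_init();
    curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
    // Issue DELETE instead of GET; the rest of the transfer setup is
    // shared with the other verbs.
    curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "DELETE");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
}
```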
addToStore(): Don't parse the NAR
* StringSource: Implement skip()
This is slightly faster than doing a read() into a buffer just to
discard the data (see the sketch after this list).
* LocalStore::addToStore(): Skip unnecessary NARs rather than parsing them
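A minimal sketch of such an override, assuming StringSource holds its data as a string view `s` with a cursor `pos` (member and exception names assumed):

```c++
void StringSource::skip(size_t len)
{
    // Just advance the cursor instead of copying bytes into a scratch
    // buffer only to discard them.
    if (len > s.size() - pos)
        throw EndOfFile("end of string reached");
    pos += len;
}
```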
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
The `s3:ListBucket` permission is required for read operations on S3
binary caches, not just for writes. Without this permission, users get
"Access Denied" errors when running `nix-build`.
Extract the path-based compression method determination logic into a
protected method that returns `std::optional<std::string>`. This allows
subclasses to reuse the logic and makes the semantics clearer
(`std::nullopt` means no compression, rather than an empty string).
This prepares for S3BinaryCacheStore to apply the same compression
rules when implementing multipart uploads.
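The extracted hook might look roughly like this (the method name and suffix rules are illustrative; the compression settings and `hasSuffix` are assumed to be the store's existing options and helper):

```c++
// nullopt = store the file uncompressed; an empty string no longer
// doubles as "no compression".
std::optional<std::string>
HttpBinaryCacheStore::getCompressionMethod(const std::string & path)
{
    if (hasSuffix(path, ".narinfo") && !narinfoCompression.empty())
        return narinfoCompression;
    if (hasSuffix(path, ".ls") && !lsCompression.empty())
        return lsCompression;
    if (hasSuffix(path, "/log") && !logCompression.empty())
        return logCompression;
    return std::nullopt;
}
```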
Fix POST requests with a body to use the correct curl option for
specifying the body size. Previously we used `CURLOPT_INFILESIZE_LARGE`
for both POST and PUT, but POST requires `CURLOPT_POSTFIELDSIZE_LARGE`.
This caused POST request bodies not to be sent correctly, manifesting as
S3 multipart `CompleteMultipartUpload` requests failing with "You must
specify at least one part" even though the XML body contained valid parts.
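Sketched, the fix is just choosing the size option by verb (the helper itself is illustrative):

```c++
#include <curl/curl.h>

// POST bodies must be sized with CURLOPT_POSTFIELDSIZE_LARGE;
// CURLOPT_INFILESIZE_LARGE only applies to uploads such as PUT.
void setBodySize(CURL * curl, bool isPost, curl_off_t size)
{
    if (isPost)
        curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE_LARGE, size);
    else
        curl_easy_setopt(curl, CURLOPT_INFILESIZE_LARGE, size);
}
```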
When Nix's SQLite narinfo cache indicates a NAR exists, but the NAR
has been garbage collected from the binary cache, Nix displays error
messages even though the operation succeeds via fallback. This is
misleading because the cached narinfo is simply outdated.
This changes `SubstituteGone` exceptions to produce warnings instead of
errors, accurately reflecting that this is an expected cache coherency
issue, not an actual failure.
Fixes #11411

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
The old code committed blatant reentrancy crimes: it tried to do an
erase from inside the emplace callback on the same container. This
would fail miserably with an assertion in Boost:
terminating due to unexpected unrecoverable internal error: Assertion '(!find(px))&&("reentrancy not allowed")' failed in boost::unordered::detail::foa::entry_trace::entry_trace(const void *) at include/boost/unordered/detail/foa/reentrancy_check.hpp:33
This is trivially reproduced by using any S3 URL with a non-empty profile:
nix-prefetch-url "s3://happy/crash?profile=default"
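A minimal sketch of the hazard and the fix, using a Boost concurrent map (the types and the staleness check are hypothetical):

```c++
#include <boost/unordered/concurrent_flat_map.hpp>
#include <string>

boost::concurrent_flat_map<std::string, int> credentialCache;

void refresh(const std::string & profile)
{
    bool stale = false;
    // Erasing from inside the visitation callback is what trips the
    // reentrancy assertion; only record the decision here...
    credentialCache.try_emplace_or_visit(profile, /*fresh=*/0,
        [&](auto & kv) { stale = kv.second < 0; });
    // ...and perform the erase after the call has returned.
    if (stale)
        credentialCache.erase(profile);
}
```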
This slightly improves the log output by including the
region/profile/endpoint when S3 store references are printed. Instead of:
copying path '/nix/store/lxnp9cs4cfh2g9r2bs4z7gwwz9kdj2r9-test-package-c' to 's3://bucketname'...
This now includes:
copying path '/nix/store/lxnp9cs4cfh2g9r2bs4z7gwwz9kdj2r9-test-package-c' to 's3://bucketname?endpoint=http://server:9000&region=eu-west-1'...
We now unconditionally compile support for `s3://` URLs and stores
without authentication. The whole curl version check can be greatly
simplified thanks to the previous commit, which bumps the minimum
required curl version.
That version was released back in 2021, and it's doubtful that anybody
still uses it, since it's full of vulnerabilities [^]
[^]: https://curl.se/docs/vuln-7.75.0.html
I realized that we can actually do this, even though it is not at all
what nlohmann expects: because the extra parameter has a default
argument, nlohmann doesn't need to care. Sneaky!
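A self-contained illustration of the trick (the type and the extra parameter are invented): nlohmann always invokes `adl_serializer<T>::from_json(j, v)` with exactly two arguments, so a defaulted third parameter goes unnoticed.

```c++
#include <nlohmann/json.hpp>
#include <stdexcept>

struct Settings { int level = 0; };

namespace nlohmann {
template<>
struct adl_serializer<Settings>
{
    // nlohmann itself always calls from_json(j, v) with two arguments;
    // the defaulted third parameter is only for direct callers.
    static void from_json(const json & j, Settings & s, bool strict = true)
    {
        s.level = j.at("level").get<int>();
        if (strict && s.level < 0)
            throw std::runtime_error("negative level");
    }
};
} // namespace nlohmann
```

`j.get<Settings>()` still works, because overload resolution happily fills in the defaulted parameter.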
Since 3c610df550 this resulted in `getting status of`
errors on paths inside the chroot if a path was already valid. Careful
inspection of the logic shows that if `buildMode != bmCheck`,
`actualPath` gets reassigned to `store.toRealPath(finalDestPath)`. The
only branch that cares about `actualPath` is the `buildMode == bmCheck`
case, which doesn't lead to `optimisePath` anyway.
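Heavily simplified, the control flow in question (names taken from the description above, not the real code):

```c++
// Sketch: actualPath initially points inside the chroot.
std::string actualPath = chrootRootDir + storePathInChroot;

if (buildMode != bmCheck)
    // Unconditionally reassigned, so the chroot path is dropped before
    // anything could stat() it.
    actualPath = store.toRealPath(finalDestPath);
else
    // The only consumer of the chroot actualPath, and this branch
    // never calls optimisePath().
    checkOutputAgainst(actualPath);
```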