In particular:
- Remove `get`; it is redundant with `valueAt` and the `get` in
`util.hh`.
- Remove `nullableValueAt`. It is morally just the function composition
`getNullable . valueAt`, not an orthogonal combinator like the others.
- Make `optionalValueAt` return a pointer, not a `std::optional`. This
still expresses optionality, but without creating a needless copy, and
brings it in line with the other combinators, which also return
references.
- Delete the overloads of `valueAt` and `optionalValueAt` that take the
map by value, as was done for `get` in 408c09a120; this prevents bugs
and unnecessary copies.
`adl_serializer<DerivationOptions::OutputChecks>::from_json` was the one
use of `getNullable`. After switching it to `getNullable . valueAt`, I
gave it a little static function for the `std::optional` it ultimately
does need to create. That could go in `json-utils.hh` eventually, but I
didn't bother for now since only one thing needs it.
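For illustration, a minimal sketch of the pointer-returning shape over nlohmann's `json` (not the exact Nix implementation):

```
#include <nlohmann/json.hpp>
#include <string>

// nullptr expresses "key absent" without copying the value out of the map.
static const nlohmann::json * optionalValueAt(
    const nlohmann::json & map, const std::string & key)
{
    auto it = map.find(key);
    return it == map.end() ? nullptr : &*it;
}
```

Call sites can then write `if (auto * v = optionalValueAt(json, "key")) ...` and keep borrowing from the original object.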
Co-authored-by: Sergei Zimmerman <sergei@zimmerman.foo>
S3 buckets support object versioning to prevent unexpected changes,
but Nix previously lacked the ability to fetch specific versions of
S3 objects. This adds support for a `versionId` query parameter in S3
URLs, enabling users to pin to specific object versions:
```
s3://bucket/key?region=us-east-1&versionId=abc123
```
Move the HttpBinaryCacheStore class from the .cc file to a header to
enable inheritance by S3BinaryCacheStore. Create an S3BinaryCacheStore
class that overrides upsertFile() to implement the multipart upload logic.
Add a sizeHint parameter to BinaryCacheStore::upsertFile() to enable
size-based upload decisions in implementations. This lays the groundwork
for reintroducing S3 multipart upload support.
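Taken together, a compilable sketch of the shape these two changes describe, using stand-in types and an assumed size threshold (the real Nix signatures may differ):

```
#include <cstdint>
#include <iostream>
#include <memory>
#include <string>

// Stand-in for the class now exposed in a header so it can be inherited.
struct HttpBinaryCacheStore
{
    virtual ~HttpBinaryCacheStore() = default;

    // upsertFile() carrying the new sizeHint parameter.
    virtual void upsertFile(
        const std::string & path,
        std::shared_ptr<std::basic_iostream<char>> istream,
        const std::string & mimeType,
        uint64_t sizeHint)
    {
        // Plain HTTP PUT in the real store; elided here.
    }
};

struct S3BinaryCacheStore : HttpBinaryCacheStore
{
    // Assumed cutoff; the real threshold is whatever the store configures.
    static constexpr uint64_t multipartThreshold = 100 * 1024 * 1024;

    void upsertFile(
        const std::string & path,
        std::shared_ptr<std::basic_iostream<char>> istream,
        const std::string & mimeType,
        uint64_t sizeHint) override
    {
        if (sizeHint >= multipartThreshold) {
            // Initiate multipart upload, send parts, complete (elided).
        } else {
            HttpBinaryCacheStore::upsertFile(path, std::move(istream), mimeType, sizeHint);
        }
    }
};
```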
Add support for HTTP DELETE requests to the FileTransfer infrastructure.
This enables aborting S3 multipart uploads via DELETE requests to S3
endpoints.
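For illustration, the standard libcurl way to issue a DELETE (the surrounding FileTransfer plumbing is assumed):

```
#include <curl/curl.h>

// Minimal illustration: perform an HTTP DELETE with a libcurl easy handle.
static CURLcode httpDelete(const char * url)
{
    CURL * curl = curl_easy_init();
    if (!curl) return CURLE_FAILED_INIT;
    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "DELETE");
    CURLcode res = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    return res;
}
```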
addToStore(): Don't parse the NAR
* StringSource: Implement skip()
This is slightly faster than doing a read() into a buffer just to
discard the data (see the sketch after this list).
* LocalStore::addToStore(): Skip unnecessary NARs rather than parsing them
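An illustrative sketch of the cursor-advancing skip() on a string-backed source (member names assumed; the real interface may differ, e.g. by throwing at end-of-file):

```
#include <algorithm>
#include <cstddef>
#include <string_view>

struct StringSource
{
    std::string_view s; // the backing data
    size_t pos = 0;     // read cursor

    // Discard up to len bytes by advancing the cursor; no buffer, no copy.
    size_t skip(size_t len)
    {
        size_t n = std::min(len, s.size() - pos);
        pos += n;
        return n;
    }
};
```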
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
The s3:ListBucket permission is required for read operations on S3
binary caches, not just for writes. Without this permission, users get
"Access Denied" errors when running nix-build.
Extract the path-based compression method determination logic into a
protected method that returns `std::optional<std::string>`. This allows
subclasses to reuse the logic and makes the semantics clearer
(`std::nullopt` means no compression, rather than an empty string).
This prepares for S3BinaryCacheStore to apply the same compression
rules when implementing multipart uploads.
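A minimal sketch of the protected helper under those semantics (the codec choices here are placeholders for the store's configured settings):

```
#include <optional>
#include <string>

// std::nullopt means "store uncompressed"; a value names the codec to use.
std::optional<std::string> compressionMethodFor(const std::string & path)
{
    if (path.ends_with(".narinfo"))
        return "xz";         // placeholder for narinfoCompression
    if (path.starts_with("log/"))
        return "br";         // placeholder for logCompression
    return std::nullopt;     // everything else: no compression
}
```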
Fix POST requests with data to use the correct curl option for specifying
the body size. Previously, `CURLOPT_INFILESIZE_LARGE` was used for both
POST and PUT, but POST requires `CURLOPT_POSTFIELDSIZE_LARGE`.
This caused POST request bodies to not be sent correctly, manifesting as
S3 multipart CompleteMultipartUpload requests failing with "You must
specify at least one part" even though the XML body contained valid parts.
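In sketch form (handle setup elided), the fix is to pick the size option that matches the verb:

```
#include <curl/curl.h>

// POST bodies take their size from CURLOPT_POSTFIELDSIZE_LARGE;
// CURLOPT_INFILESIZE_LARGE applies to uploads such as PUT.
static void setBodySize(CURL * curl, bool isPost, curl_off_t size)
{
    if (isPost)
        curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE_LARGE, size);
    else
        curl_easy_setopt(curl, CURLOPT_INFILESIZE_LARGE, size);
}
```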
When Nix's SQLite narinfo cache indicates a NAR exists, but the NAR
has been garbage collected from the binary cache, Nix displays error
messages even though the operation succeeds via fallback. This is
misleading because the cached narinfo is simply outdated.
This changes `SubstituteGone` exceptions to produce warnings instead of
errors, accurately reflecting that this is an expected cache-coherency
issue, not an actual failure.
Fixes #11411
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
The old code committed reentrancy crimes: it tried to do an erase inside
the emplace callback. This would fail miserably with an assertion
in Boost:
terminating due to unexpected unrecoverable internal error: Assertion '(!find(px))&&("reentrancy not allowed")' failed in boost::unordered::detail::foa::entry_trace::entry_trace(const void *) at include/boost/unordered/detail/foa/reentrancy_check.hpp:33
This is trivially reproduced by using any S3 URL with a non-empty profile:
nix-prefetch-url "s3://happy/crash?profile=default"
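The pattern in miniature, using `boost::concurrent_flat_map` (illustrative; the actual bug sat in an emplace callback, but any reentrant call into the same map trips the check):

```
#include <boost/unordered/concurrent_flat_map.hpp>
#include <string>

int main()
{
    boost::concurrent_flat_map<std::string, int> m;
    m.emplace("key", 0);
    // Calling back into `m` from inside a visitation callback fires
    // Boost's "reentrancy not allowed" assertion.
    m.visit("key", [&](auto &) { m.erase("key"); });
}
```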
This slightly improves the logging situation by including the region/profile/endpoint
in the logs when S3 store references get printed. Instead of:
copying path '/nix/store/lxnp9cs4cfh2g9r2bs4z7gwwz9kdj2r9-test-package-c' to 's3://bucketname'...
we now get:
copying path '/nix/store/lxnp9cs4cfh2g9r2bs4z7gwwz9kdj2r9-test-package-c' to 's3://bucketname?endpoint=http://server:9000&region=eu-west-1'...
We now unconditionally compile support for s3:// URLs and stores
without authentication. The whole curl version check can be greatly
simplified by the previous commit, which bumps the minimum required curl
version.
That version was released back in 2021, and it is doubtful that anybody
still uses it, since it is full of vulnerabilities. [^]
[^]: https://curl.se/docs/vuln-7.75.0.html
I realized that we can actually do this thing, even though it is not
what nlohmann expects at all, because the extra parameter has a default
argument so nlohmann doesn't need to care. Sneaky!
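For illustration, the mechanism in question, with hypothetical types: nlohmann looks up `from_json(j, v)` via ADL and calls it with exactly two arguments, so a defaulted third parameter goes unnoticed by it while remaining available to direct callers.

```
#include <nlohmann/json.hpp>

struct Settings { int offset = 0; };   // hypothetical extra context
struct Thing { int x = 0; };           // hypothetical payload

// nlohmann's two-argument ADL call still compiles thanks to the default
// argument; direct callers may pass their own Settings.
void from_json(const nlohmann::json & j, Thing & t, const Settings & s = {})
{
    t.x = j.at("x").get<int>() + s.offset;
}
```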
Since 3c610df550 this resulted in `getting status of` errors on paths
inside the chroot if a path was already valid. Careful inspection of the
logic shows that if `buildMode != bmCheck`, `actualPath` gets reassigned
to `store.toRealPath(finalDestPath)`. The only branch that cares about
`actualPath` is the `buildMode == bmCheck` case, which doesn't lead to
`optimisePath` anyway.
Instead of the cryptic:
> error: Failed to resolve AWS credentials: error code 6153
we now get the more legible:
> error: AWS authentication error: 'Valid credentials could not be sourced by the IMDS provider' (6153)
This makes it so we don't need to rely on global variables and hacky destructors to
clean up another global variable. Just declaring the members in the correct order in
the class is more than enough, since class members are destroyed in reverse
declaration order.
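The reasoning in sketch form: C++ destroys non-static data members in reverse declaration order, so a member declared after the resource it uses is always torn down first.

```
struct ClientContext { /* e.g. the SDK's global state */ };

struct Client
{
    ClientContext & ctx; // must outlive the client
};

struct AwsState
{
    ClientContext ctx;   // constructed first, destroyed last
    Client client{ctx};  // constructed second, destroyed first
};
```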
This partially reverts commit 5e46df973f,
partially reversing changes made to
8c789db05b.
We do this because Hydra, while using the newer version of the protocol,
still uses this command, even though Nix (as a client) doesn't use it.
On that basis, we don't want to remove it (or consider it only part of
the older versions of the protocol) until Hydra no longer uses the
Legacy SSH Protocol.
Realisations are conceptually key-value pairs, mapping `DrvOutputs` (the
key) to information about that derivation output.
This separates out the value type, which will be useful in maps etc.,
where we don't want to denormalize by including the key twice.
This matches similar changes for existing types:
| keyed | unkeyed |
|--------------------|------------------------|
| `ValidPathInfo` | `UnkeyedValidPathInfo` |
| `KeyedBuildResult` | `BuildResult` |
| `Realisation` | `UnkeyedRealisation` |
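A sketch of the resulting pair of types (fields abbreviated and names assumed; the real types carry more information, such as signatures):

```
#include <string>

struct DrvOutput { std::string drvHash, outputName; }; // the key
struct StorePath { std::string name; };                // stand-in

// The value: what we know about one derivation output.
struct UnkeyedRealisation
{
    StorePath outPath;
};

// The keyed form, for contexts where the key must travel with the value.
struct Realisation : UnkeyedRealisation
{
    DrvOutput id;
};
```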
Co-authored-by: Sergei Zimmerman <sergei@zimmerman.foo>
The macro now accurately reflects its purpose: gating only AWS
authentication code, not all S3 functionality. S3 URL parsing, store
configuration, and public bucket access work regardless of this flag.
This rename clarifies that:
- S3 support is always available (URL parsing, store registration)
- Only AWS credential resolution requires the flag
- The flag controls AWS CRT SDK dependency, not S3 protocol support
Move S3 URL parsing, store configuration, and public bucket support
outside of NIX_WITH_S3_SUPPORT guards. Only AWS credential resolution
remains gated, allowing builds with withAWS = false to:
- Parse s3:// URLs
- Register S3 store types
- Access public S3 buckets (via HTTPS conversion)
- Use S3-compatible services without authentication
The setupForS3() function now always performs URL conversion, with
authentication code conditionally compiled based on NIX_WITH_S3_SUPPORT.
The aws-creds.cc file (only code using AWS CRT SDK) is now conditionally
compiled by meson.
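In outline (a sketch, not the literal code), the gating now looks like:

```
void setupForS3(/* request state */)
{
    // Always compiled: rewrite s3:// to the equivalent https:// object URL
    // and apply query parameters like region and endpoint.

#if NIX_WITH_S3_SUPPORT
    // Only with AWS support: resolve credentials and sign the request.
#endif
}
```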